EKS: change ownership of a cluster
Issue:
Your current user or role does not have access to Kubernetes objects on this EKS cluster. This may be due to the current user or role not having Kubernetes RBAC permissions to describe cluster resources or not having an entry in the cluster’s auth config map.
Root Cause:
When an EKS cluster is created, only the IAM user (or role) that created it is granted Kubernetes administrator (system:masters) access; the auth config map does not record any other principals. In such a scenario, no other IAM user can read or write cluster resources, not even the account administrator.
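For reference, on a freshly created cluster the aws-auth ConfigMap typically contains only the worker node role mapping; the creator's admin access is granted implicitly by EKS and does not appear in the ConfigMap at all. A minimal sketch of what it could look like (the role ARN is a placeholder):

kubectl get cm aws-auth -n kube-system -o yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::<account-id>:role/<node-instance-role>
      username: system:node:{{EC2PrivateDNSName}}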
Best practice:
The AWS EKS best practice is to use a dedicated IAM user or role to create and operate clusters, so that access is not tied to a single person's identity and scenarios like this are avoided.
Fix:
Please note that you need AWS administrator access to perform the steps below.
Step 1: Get the cluster creation user or role.
Either you already know which IAM user or role created the cluster, or it has to be retrieved from the AWS internal control plane. No AWS customer has permission to query the internal control plane, so in that case you have to ask AWS technical support to retrieve it.
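Optionally, if CloudTrail was enabled in the region and the cluster was created within the last 90 days, you may be able to find the creator yourself via the CreateCluster event before contacting support:

aws cloudtrail lookup-events \
  --region ap-southeast-1 \
  --lookup-attributes AttributeKey=EventName,AttributeValue=CreateCluster \
  --query 'Events[].{Time:EventTime,User:Username}' \
  --output table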
Step 2: Generate a new access key for the cluster owner.
Please note that this is only possible if the cluster owner still exists among the account's IAM users; otherwise, create a new IAM user with the same name and proceed with the next steps.
Go to the AWS IAM console and select the cluster owner, then open the "Security credentials" tab and create a new access key there.
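If you prefer the CLI, the same access key can be created with the command below (the user name is a placeholder; replace it with the actual cluster owner):

aws iam create-access-key --user-name <cluster-owner-user-name>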
Step 3: Configure the AWS CLI with the cluster owner's access key.
Edit the ~/.aws/credentials file and add the cluster owner's programmatic access credentials there.
First, back up your own credentials under another profile name, then add the cluster owner's credentials as the default profile.
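A sketch of what ~/.aws/credentials could look like after this step (the profile name "backup" and all key values are placeholders):

[backup]
aws_access_key_id = <your-original-access-key-id>
aws_secret_access_key = <your-original-secret-access-key>

[default]
aws_access_key_id = <cluster-owner-access-key-id>
aws_secret_access_key = <cluster-owner-secret-access-key>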
Step 4: Verify active user.
aws sts get-caller-identity
The output should look like the following; the Arn field should show the cluster owner's ARN:
{
    "UserId": "AIDAQDUJLDNY5HP6BVNL5",
    "Account": "007804230513",
    "Arn": "arn:aws:iam::007804230513:user/kavishka"
}
Step 5: Point kubeconfig at the relevant cluster.
You need to update your kubeconfig for the cluster in question. To do that, change the cluster name and region in the following command:
aws eks update-kubeconfig --name bahasanlp --region ap-southeast-1
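To double-check that kubectl now points at the intended cluster, you can print the active context:

kubectl config current-context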
Step 6: Verify the cluster and nodes.
kubectl get nodes
Sample output:
NAME                                                  STATUS   ROLES    AGE   VERSION
ip-192-168-17-164.ap-southeast-1.compute.internal     Ready    <none>   28d   v1.19.6-eks-49a6c0
ip-192-168-182-188.ap-southeast-1.compute.internal    Ready    <none>   76d   v1.19.6-eks-49a6c0
Step 7: Change the owner to the current user.
In the following command, set the cluster name, region, and the IAM user ARN of the new user accordingly, then execute it.
eksctl create iamidentitymapping --cluster bahasanlp --region=ap-southeast-1 --arn arn:aws:iam::007804230513:user/kavishka --group system:masters --username admin
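To confirm that the mapping was added, you can list the cluster's identity mappings with the same cluster name and region:

eksctl get iamidentitymapping --cluster bahasanlp --region=ap-southeast-1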
Step 8: Revert the AWS CLI credentials.
Edit the ~/.aws/credentials file again, remove the cluster owner's credentials, and restore your own credentials as the default profile.
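After reverting, ~/.aws/credentials should again contain only your own keys as the default profile, for example (values are placeholders):

[default]
aws_access_key_id = <your-original-access-key-id>
aws_secret_access_key = <your-original-secret-access-key>

Once the temporary access key created in Step 2 is no longer needed, it is also a good idea to deactivate or delete it in the IAM console.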
Step 9: Verify active user.
aws sts get-caller-identity
The output should look like the following; this time the Arn field should show your own user's ARN (the new cluster admin):
{
    "UserId": "AIDAQDUJLDNY5HP6BVNL5",
    "Account": "007804230513",
    "Arn": "arn:aws:iam::007804230513:user/kavishka"
}
Step 10: Verify access to the cluster.
kubectl describe cm aws-auth -n kube-system
If you have changed the cluster owner successfully, the output should look like the following:
Name:         aws-auth
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
mapRoles:
----
- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::007804230513:role/eksctl-airflow-nodegroup-workers-NodeInstanceRole-74NMVN4R9NC2
  username: system:node:{{EC2PrivateDNSName}}

mapUsers:
----
- groups:
  - system:masters
  userarn: arn:aws:iam::007804230513:user/kavishka
  username: admin

Events:  <none>
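As a final check, running with your own credentials and kubeconfig, you can ask Kubernetes whether the newly mapped user has full cluster rights; the answer should be "yes":

kubectl auth can-i '*' '*' --all-namespaces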