Say you have an EKS setup using IAM access keys with Admin permissions, and you're using an AWS profile that you've confirmed can retrieve the EKS kubeconfig with `aws eks update-kubeconfig --name context-name --region appropriate-region-name-n`.
But then `kubectl get pods --context context-name` still fails, oddly with "You must be logged in to the server (Unauthorized)".
Similarly, `kubectl describe configmap -n kube-system aws-auth` fails with the same message.
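Before touching the cluster, it's worth confirming exactly which IAM identity `kubectl` is presenting: the kubeconfig generated by `update-kubeconfig` uses an exec credential plugin that calls the AWS CLI, so whatever `aws sts get-caller-identity` returns is what the API server sees. A quick check (the profile name and output values here are placeholders):

```sh
# Confirm which IAM identity the CLI (and therefore kubectl's exec
# credential plugin) is using. Profile name and output are illustrative.
aws sts get-caller-identity --profile my-admin-profile
# {
#     "UserId": "AIDAEXAMPLEID",
#     "Account": "123456789012",
#     "Arn": "arn:aws:iam::123456789012:user/alice"
# }
```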
The first question to ask: is your username and/or role included, and spelled correctly, in the cluster's `aws-auth` ConfigMap?
If your identity has access to all of the resources used by EKS, then the ConfigMap is likely the issue. Check out AWS's "How do I resolve an unauthorized server error when I connect to the Amazon EKS API server?" for more details, and, most likely, the "You're not the cluster creator" section.
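If it turns out your identity is simply missing from the ConfigMap, the cluster creator (or anyone already mapped to `system:masters`) can add it. A minimal sketch, assuming `eksctl` is installed; the account ID, user name, and cluster/region names below are placeholders:

```sh
# Add an IAM user mapping to the aws-auth ConfigMap via eksctl.
# Must be run as an identity that already has cluster access.
eksctl create iamidentitymapping \
  --cluster context-name \
  --region appropriate-region-name-n \
  --arn arn:aws:iam::123456789012:user/alice \
  --username alice \
  --group system:masters

# Equivalent manual route: edit the ConfigMap directly.
kubectl edit configmap -n kube-system aws-auth
```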
Debugging steps:
- Have the cluster creator, or a member of the `system:masters` group, run `kubectl describe configmap -n kube-system aws-auth` and verify that the `mapUsers` or `mapRoles` section mentions, with adequate permissions, the identity returned by `aws sts get-caller-identity`.
- If the identity/role is there, double-check that it has adequate permissions.
- Double-check that the identity/role ARN and username actually match your identity, character for character. (This was a relatively simple setup, so in my case the cause was just a misspelled username; the sketch below shows that failure mode.)
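For the last two checks, a side-by-side comparison makes a typo easy to spot. A sketch of what that looks like; the ARNs and the `describe` output are illustrative, not real output from my cluster:

```sh
# Your actual identity, as the API server will see it:
aws sts get-caller-identity --query Arn --output text
# arn:aws:iam::123456789012:user/alice

# The mappings the cluster trusts (run as the cluster creator):
kubectl describe configmap -n kube-system aws-auth
# ...
# mapUsers:
# ----
# - userarn: arn:aws:iam::123456789012:user/alicce  # one-letter typo: no
#   username: alice                                 # mapping matches, so the
#   groups:                                         # server returns Unauthorized
#     - system:masters
# ...
```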