Migrating from aws-auth identity mapping
Customers who already use Amazon EKS may be familiar with the aws-auth ConfigMap mechanism for managing IAM principal access to clusters. This section demonstrates how to migrate entries from this older mechanism to cluster access entries.
An IAM role named eks-workshop-admins has been pre-configured in the EKS cluster for a group that needs EKS administrative permissions. Let's check the aws-auth ConfigMap:
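It can be retrieved with kubectl (this assumes your current kubeconfig context can read the kube-system namespace):

# Dump the aws-auth ConfigMap as YAML
$ kubectl -n kube-system get configmap aws-auth -o yaml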
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::1234567890:role/eksctl-eks-workshop-nodegroup-defa-NodeInstanceRole-acgt4WAVfXAA
      username: system:node:{{EC2PrivateDNSName}}
    - groups:
      - system:masters
      rolearn: arn:aws:iam::1234567890:role/eks-workshop-admins
      username: cluster-admin
  mapUsers: |
    []
kind: ConfigMap
metadata:
  creationTimestamp: "2024-05-09T15:21:57Z"
  name: aws-auth
  namespace: kube-system
  resourceVersion: "5186190"
  uid: 2a1f9dc7-e32d-44e5-93b3-e5cf7790d95e
Let's impersonate this IAM role to check its access:
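There are several ways to do this; one sketch is to create a dedicated kubeconfig context that assumes the role (the $EKS_CLUSTER_NAME variable and the admins alias are illustrative assumptions, not part of the original setup):

# Add a kubeconfig context that authenticates by assuming the admins role
$ aws eks update-kubeconfig --name $EKS_CLUSTER_NAME \
  --role-arn arn:aws:iam::1234567890:role/eks-workshop-admins \
  --alias admins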
We should be able to list any pods, for example:
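Assuming the admins context created above and that the sample application's pods run in a carts namespace:

$ kubectl --context admins get pod -n carts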
NAME READY STATUS RESTARTS AGE
carts-6d4478747c-vvzhm 1/1 Running 0 5m54s
carts-dynamodb-d9f9f48b-k5v99 1/1 Running 0 15d
Now let's delete the aws-auth ConfigMap entry for this IAM role. We'll use eksctl for convenience:
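A command along these lines removes the identity mapping ($EKS_CLUSTER_NAME is an assumption; substitute your cluster name):

# Remove the mapRoles entry for the admins role from aws-auth
$ eksctl delete iamidentitymapping --cluster $EKS_CLUSTER_NAME \
  --arn arn:aws:iam::1234567890:role/eks-workshop-admins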
If we try the same command as before, we'll now be denied access:
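For example, repeating the pod listing with the impersonated context:

$ kubectl --context admins get pod -n carts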
error: You must be logged in to the server (Unauthorized)
Let's create an access entry so the cluster admins can access the cluster again:
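Assuming the cluster's authentication mode allows access entries (API or API_AND_CONFIG_MAP), the entry can be created with the AWS CLI:

# Register the IAM role as an access entry on the cluster
$ aws eks create-access-entry --cluster-name $EKS_CLUSTER_NAME \
  --principal-arn arn:aws:iam::1234567890:role/eks-workshop-admins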
Next, we'll associate an access policy with this principal using the AmazonEKSClusterAdminPolicy policy:
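A sketch of the call, granting the policy cluster-wide scope ($EKS_CLUSTER_NAME is again an assumption):

# Attach the AWS-managed cluster admin access policy to the access entry
$ aws eks associate-access-policy --cluster-name $EKS_CLUSTER_NAME \
  --principal-arn arn:aws:iam::1234567890:role/eks-workshop-admins \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster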
Now we can test that access is working again:
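Repeating the earlier pod listing with the impersonated context:

$ kubectl --context admins get pod -n carts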
NAME READY STATUS RESTARTS AGE
carts-6d4478747c-vvzhm 1/1 Running 0 5m54s
carts-dynamodb-d9f9f48b-k5v99 1/1 Running 0 15d
By following these steps, we've successfully migrated an IAM role from the aws-auth ConfigMap to the newer Cluster Access Management API, which provides a more streamlined way to manage access to your Amazon EKS clusters.