Authenticate to Kubernetes API server running on AWS using IAM role

While working for one of the biggest consulting firms, I was assigned the task of setting up a deployment pipeline between our self-hosted CI runners and an EKS cluster. Initially, I used the service account token method for authentication, but I later realised it isn't a secure approach because it requires a long-lived service account token to be hard-coded in the kubeconfig file. So I started reading about other ways to authenticate, and that's when I learnt about a better approach: authenticating requests to the Kubernetes API server using an IAM role, which does not require any secrets to be hard-coded in the kubeconfig file.

Prerequisites:

  • EKS Cluster

  • EC2 instance acting as a dummy self-hosted runner, with the awscli and kubectl packages installed

Note: Make sure the EC2 instance has a role attached that allows it to call the DescribeCluster API on the EKS cluster. You will also need to update the EKS cluster security group to allow traffic on port 443 from the EC2 instance's security group. Example CLI snippets for both follow the figures below.

Fig 1. EKS Cluster

Fig 2. Dummy Runner

Fig 3. Dummy Runner IAM Role

Fig 4. Dummy Runner Inline Policy
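
For reference, the inline policy from Fig 4 and the security group rule mentioned in the note can be created along the following lines. This is a minimal sketch: the policy name eks-describe-cluster and the security group IDs are placeholders, and the role name dummy-runner matches the role we will map in aws-auth later on.

# Sketch: attach an inline policy to the runner role that permits eks:DescribeCluster.
# Scope the Resource down to your cluster's ARN if you prefer.
aws iam put-role-policy \
  --role-name dummy-runner \
  --policy-name eks-describe-cluster \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": "eks:DescribeCluster",
        "Resource": "*"
      }
    ]
  }'

# Sketch: allow HTTPS (443) from the runner's security group to the cluster security group.
# sg-CLUSTER and sg-RUNNER are placeholders for your actual security group IDs.
aws ec2 authorize-security-group-ingress \
  --group-id sg-CLUSTER \
  --protocol tcp \
  --port 443 \
  --source-group sg-RUNNER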

We will start by creating an RBAC rule to restrict permissions, and this rule will be bound to a Kubernetes user. Later, we will associate this user with an IAM role.


Authorisation

Let's create an RBAC rule in our Kubernetes cluster that only allows listing the namespaces in the EKS cluster.

cluster-role.yaml:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: demo
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["list"]

cluster-role-binding.yaml:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: demo
subjects:
  - kind: User
    name: demo
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: demo
  apiGroup: rbac.authorization.k8s.io

Note: The dummy self-hosted runner is not yet configured to interact with the EKS cluster, so you will have to create the ClusterRole and ClusterRoleBinding using a kubeconfig that uses the same credentials you used to create the EKS cluster earlier.

To create the ClusterRole and ClusterRoleBinding run the following commands:

kubectl apply -f cluster-role.yaml
kubectl apply -f cluster-role-binding.yaml
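
Before moving on, you can optionally verify the rule from the same admin kubeconfig using impersonation; a quick sanity check, not part of the original flow:

kubectl auth can-i list namespaces --as demo   # expected output: yes
kubectl auth can-i list pods --as demo         # expected output: no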

The next step is to configure the authentication part. Let's keep going.


Authentication

In this step, we need to update the aws-auth ConfigMap to associate the IAM role attached to the self-hosted runner with the Kubernetes user we created in the previous step. To achieve that, we add a new entry under the mapRoles section of the aws-auth ConfigMap in the kube-system namespace. If the aws-auth ConfigMap does not exist in your cluster, you can apply the snippet below as is; otherwise, just add the block containing the rolearn and the username to your existing ConfigMap.

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::810010619448:role/dummy-runner
      username: demo
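
As an alternative to editing the ConfigMap by hand, eksctl can create the same mapping; a sketch assuming eksctl is installed and CLUSTER_NAME is replaced with your cluster's name:

eksctl create iamidentitymapping \
  --cluster CLUSTER_NAME \
  --arn arn:aws:iam::810010619448:role/dummy-runner \
  --username demo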

This completes the authentication part, and it's time for us to test everything we have done so far.

Alright. Let's connect to the EC2 instance, either via SSM or via SSH with a key pair depending on the instance configuration, and generate the kubeconfig file.

aws eks update-kubeconfig --name CLUSTER_NAME --alias CLUSTER_NAME

Fig 5. Generate kube config

Let's take a look at the kubeconfig file and confirm that there are no hard-coded secrets, such as a service account token, under the users section.

cat ~/.kube/config

Fig 6. kubeconfig
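
Instead of a static token, the users section delegates authentication to the AWS CLI through an exec plugin. It should look roughly like the snippet below; the exact apiVersion and argument order depend on your awscli version, and REGION, ACCOUNT_ID and CLUSTER_NAME are placeholders:

users:
- name: arn:aws:eks:REGION:ACCOUNT_ID:cluster/CLUSTER_NAME
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - --region
        - REGION
        - eks
        - get-token
        - --cluster-name
        - CLUSTER_NAME

Every kubectl invocation runs aws eks get-token, which uses the instance's IAM role to produce a short-lived token; the API server then maps that role to the demo user via the aws-auth ConfigMap.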

If you remember, the ClusterRole we created earlier only allows listing of namespaces, so any other action we try to perform should not go through. Let's give it a try.

First, let’s try listing the namespaces.

kubectl get ns

Fig 7. Kubernetes Namespaces

Next, let's try to list pods, configmaps, or anything else; we should receive an unauthorised (Forbidden) error for these requests.

kubectl get pods
kubectl get configmaps

Fig 8. Unauthorised Error

Cool! You have done a great job of securely setting up authentication between the self-hosted runner and the Kubernetes API server hosted on EKS.


Covering the basics

  • kubectl uses the kubeconfig file, located by default at ~/.kube/config in the user's home directory, to authenticate with the Kubernetes API server. The kubeconfig file contains a clusters section with details such as the server endpoint and the certificate needed to reach it, and a users section that specifies which method to use for authenticating with the Kubernetes API server.

  • Pods can authenticate with the API server via the service account token. By default, a pod is associated with a service account and its token is mounted within the pod at /var/run/secrets/kubernetes.io/serviceaccount/token. A certificate bundle is also mounted at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt that can be used to verify the API server (see the sketch after this list).

  • The Kubernetes API server is responsible for validating the API objects it receives (pods, deployments, configmaps, services, etc.) via REST operations and for processing them.
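
As a quick illustration of the second point above, this is roughly how a process inside a pod can call the API server using the mounted service account token (run it from inside a pod; listing namespaces will only succeed if the pod's service account is allowed to do so):

# Run inside a pod: authenticate to the API server with the mounted service account token.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer ${TOKEN}" \
     https://kubernetes.default.svc/api/v1/namespaces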

Vimal Paliwal

Vim is a DevSecOps practitioner with over eight years of professional experience. Over the years, he has architected and implemented full-fledged solutions for clients using AWS, K8s, Terraform, Python, Shell, Prometheus, etc., keeping security as the utmost priority. Along the way, as an AWS Authorised Instructor, he has trained thousands of professionals ranging from startups to Fortune companies.
