Secure network communication of EKS Fargate pods via AWS Security Groups
Security groups for EKS pods were introduced back in September 2020. Until then, if you were using EC2 instances to run your workloads you could use network policies to control communication among pods, but no such option was available for Fargate. Even at the time of writing this article, network policies aren't supported on Fargate, so we use security groups to control communication among pods and from pods to other resources within the network.
Note: Attaching security groups to pods running on EKS Fargate isn't supported on all cluster versions. Please read this doc to understand the limitations.
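You can quickly check which version your cluster is running with the AWS CLI (EKS_CLUSTER_NAME is a placeholder for your cluster name):
aws eks describe-cluster --name "EKS_CLUSTER_NAME" --query 'cluster.version' --output text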
Alright! Enough of talking. Let’s get to the fun part and get our hands dirty.
To follow along you will need an EKS cluster and an RDS Postgres instance. We will deploy two pods on the EKS cluster using the Fargate compute type, where one will be able to access the database while the other won't. We will control this behaviour using a security group.
Security Group
Let's start by creating a security group without any inbound or outbound rules. This security group will be attached to one of the pods that we will create in the next few minutes.
aws ec2 create-security-group --group-name "pod-sg" --description "Security group for eks pod" --vpc-id "vpc-xxxxxxxxxx"
Note: Replace vpc-xxxxxxxxxx with the actual VPC ID
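Tip: since we will need the new group's ID for the SecurityGroupPolicy later, you can have the same command print just the ID by appending a query:
aws ec2 create-security-group --group-name "pod-sg" --description "Security group for eks pod" --vpc-id "vpc-xxxxxxxxxx" --query 'GroupId' --output text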
Fargate Profile
We need two Fargate profiles, as we will launch each of the two pods in a separate namespace.
First Fargate Profile:
aws eks create-fargate-profile --fargate-profile-name "sg-pod" --cluster-name "EKS_CLUSTER_NAME" --pod-execution-role-arn "EKS_FARGATE_POD_EXECUTION_ROLE_ARN" --subnets "PRIVATE_SUBNET_1" "PRIVATE_SUBNET_2" --selectors "namespace=sg"
Second Fargate Profile:
aws eks create-fargate-profile --fargate-profile-name "nsg-pod" --cluster-name "EKS_CLUSTER_NAME" --pod-execution-role-arn "EKS_FARGATE_POD_EXECUTION_ROLE_ARN" --subnets "PRIVATE_SUBNET_1" "PRIVATE_SUBNET_2" --selectors "namespace=nsg"
Note: When creating the above profiles, please replace EKS_CLUSTER_NAME with the actual cluster name, EKS_FARGATE_POD_EXECUTION_ROLE_ARN with the Fargate pod execution role ARN, and PRIVATE_SUBNET_1 and PRIVATE_SUBNET_2 with the actual subnet IDs
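Profile creation takes a minute or two; before moving on you can confirm that both profiles are ACTIVE (same placeholders as above):
aws eks describe-fargate-profile --cluster-name "EKS_CLUSTER_NAME" --fargate-profile-name "sg-pod" --query 'fargateProfile.status' --output text
aws eks describe-fargate-profile --cluster-name "EKS_CLUSTER_NAME" --fargate-profile-name "nsg-pod" --query 'fargateProfile.status' --output text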
Kubernetes Namespace
As mentioned earlier, we need to create two namespaces in our cluster.
# Pods in this namespace will have a security group attached
kubectl create ns sg
# Pods in this namespace will NOT have a security group attached
kubectl create ns nsg
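A quick check that both namespaces exist:
kubectl get ns sg nsg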
Security Group Policy
If you are running a supported cluster version, your cluster will already have the SecurityGroupPolicy CRD, which is required to attach security group(s) to the pod ENI.
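You can verify that the CRD is present, and if it is, create the policy below:
kubectl get crd securitygrouppolicies.vpcresources.k8s.aws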
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: pod-sg-policy
  namespace: sg
spec:
  podSelector: {}
  securityGroups:
    groupIds:
      - sg-06c2d8f6877a7662e # Pod security group we created earlier
      - sg-xxxxxxxxxxxxxxxxx # Cluster security group
Note: When using a security group with a Fargate pod, the pod must still be able to talk to the EKS control plane, hence we attach the cluster security group as well
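If you don't have the cluster security group ID handy, you can look it up like this (again, EKS_CLUSTER_NAME is a placeholder):
aws eks describe-cluster --name "EKS_CLUSTER_NAME" --query 'cluster.resourcesVpcConfig.clusterSecurityGroupId' --output text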
Note: Leaving the podSelector empty as done in the above example attaches the security group(s) to all the pods launched in that namespace. You can also attach security group(s) only to pods within the namespace that carry certain label(s), like this:
podSelector:
  matchLabels:
    app: postgres
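Save the manifest and apply it to the cluster (assuming you named the file pod-sg-policy.yaml):
kubectl apply -f pod-sg-policy.yaml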
Launching Pods
It's finally time to launch both pods, where one will have the security group attached whereas the other won't.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: sg
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:12
          env:
            - name: POSTGRES_PASSWORD
              value: postgres
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: nsg
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:12
          env:
            - name: POSTGRES_PASSWORD
              value: postgres
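Assuming both manifests are saved in a file called postgres-deployments.yaml, apply them and note down the pod names; we will need them for kubectl exec shortly:
kubectl apply -f postgres-deployments.yaml
kubectl get pods -n sg -o wide
kubectl get pods -n nsg -o wide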
Updating RDS Security Group
Before we test the connectivity between our pod and the RDS instance, we need to make sure that the RDS instance's security group allows inbound connections only from the pod security group.
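A sketch of such a rule, assuming sg-yyyyyyyyyyyyyyyyy is a placeholder for the RDS instance's security group ID; it allows Postgres traffic (port 5432) only from the pod security group we created earlier:
aws ec2 authorize-security-group-ingress --group-id "sg-yyyyyyyyyyyyyyyyy" --protocol tcp --port 5432 --source-group "sg-06c2d8f6877a7662e"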
Testing Connectivity
We have finally reached the stage where we verify that the pod with the security group attached can connect to the RDS Postgres instance, while the pod without the security group cannot.
So, let’s exec into the pod that does not have the security group attached:
kubectl exec -it -n nsg POD_NAME -- bash
Once we are in, run the following command to test connectivity to the RDS instance:
PGCONNECT_TIMEOUT=10 psql -h RDS_ENDPOINT -U DB_USER -d DB_NAME
Yippee! My pod within the nsg namespace times out after 10 seconds. What about yours?
Now, let’s exec into the pod that has the security group attached:
kubectl exec -it -n sg POD_NAME -- bash
We run the same command that we ran earlier to test connectivity to the RDS Postgres instance:
PGCONNECT_TIMEOUT=10 psql -h RDS_ENDPOINT -U DB_USER -d DB_NAME
Voila! I'm able to connect to RDS Postgres from the pod that has the security group attached. I believe you are able to connect as well.
Covering the basics
- EKS (Elastic Kubernetes Service) is a secure and fully managed Kubernetes control plane provided by AWS. Just add nodes or use Fargate to run your K8s workloads like deployments, pods, stateful sets, ingresses, etc.
- Fargate is a serverless offering by AWS which you combine with EKS to run your Kubernetes workloads without managing the worker nodes. To deploy a pod on Fargate you need to create a Fargate profile and associate it with the EKS cluster. The Fargate profile is responsible for launching the node at runtime for running your pod.