In this guide, we’ll walk through deploying your Docker app on AWS EKS with Fargate: pushing your Docker image to ECR, creating the cluster, and setting up external access with an Application Load Balancer (ALB). Let’s dive in!
ECR Setup: Storing Your Docker Images
First, you’ll need to create a repository in ECR where your Docker images will live. After creating the repo in the AWS ECR console, follow these steps to build and push your Docker image:
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com
docker build -t my-backend-app .
docker tag my-backend-app:latest <account-id>.dkr.ecr.<region>.amazonaws.com/my-repo:latest
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/my-repo:latest
Ensure your IAM user has the necessary ECR permissions to avoid access issues. If you’re pushing to a private repository, check Amazon ECR -> Private registry -> <Repository Name> -> Permissions to confirm the IAM user has been granted access.
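As a minimal sketch, you can also attach AWS’s managed ECR policy to your IAM user from the CLI; <iam-user-name> is a placeholder for your own user:

# Grants push/pull access to ECR repositories (AWS managed policy).
# <iam-user-name> is a placeholder.
aws iam attach-user-policy \
--user-name <iam-user-name> \
--policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser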
Creating the EKS Cluster
Next, create an EKS cluster with the default add-ons, after first creating private subnets in the required Availability Zones. CoreDNS is needed for in-cluster name resolution, and its pods need somewhere to run: add a Node Group (backed by EC2 instances in a public subnet) under the cluster’s Compute tab. Then verify that CoreDNS and the related pods are running under Resources -> Deployments and Resources -> Pods.
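Once your kubectl context is set (the next step), you can also verify CoreDNS from the CLI; these commands use the standard CoreDNS labels on EKS:

# CoreDNS runs as a deployment in kube-system; its pods should be Ready.
kubectl get deployment coredns -n kube-system
kubectl get pods -n kube-system -l k8s-app=kube-dns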
Now return to the CLI and make sure the kubectl context is set:
aws eks --region <region> update-kubeconfig --name <cluster-name>
Deploying Your Application with Fargate
Before deploying the application, you have to create VPC endpoints so that pods on Fargate (which run in private subnets) can pull images from ECR. Three endpoints are needed: com.amazonaws.<region>.ecr.api and com.amazonaws.<region>.ecr.dkr, both of Interface type, and com.amazonaws.<region>.s3, which is a Gateway type.
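As a sketch, assuming placeholder IDs for your VPC, private subnets, security group, and route table, the three endpoints can be created from the CLI like this:

# Interface endpoints for the ECR API and the Docker registry endpoint.
aws ec2 create-vpc-endpoint --vpc-id <vpc-id> \
--vpc-endpoint-type Interface \
--service-name com.amazonaws.<region>.ecr.api \
--subnet-ids <private-subnet-ids> --security-group-ids <sg-id>

aws ec2 create-vpc-endpoint --vpc-id <vpc-id> \
--vpc-endpoint-type Interface \
--service-name com.amazonaws.<region>.ecr.dkr \
--subnet-ids <private-subnet-ids> --security-group-ids <sg-id>

# Gateway endpoint for S3 (ECR stores image layers in S3).
aws ec2 create-vpc-endpoint --vpc-id <vpc-id> \
--vpc-endpoint-type Gateway \
--service-name com.amazonaws.<region>.s3 \
--route-table-ids <private-route-table-id>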
Start by creating a namespace for your Fargate pods. Choose the name carefully, since it’s the identifier used in several of the following steps. Here’s namespace.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: fargate-apps
Apply it using:
kubectl apply -f namespace.yaml
Now create a Fargate Profile in the Compute tab of the EKS cluster, selecting the fargate-apps namespace. Create the pod execution role with the necessary permissions as part of this step.
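If you prefer the CLI, eksctl can create the profile as well; the profile name below is a placeholder:

# Pods in the fargate-apps namespace will be scheduled onto Fargate.
eksctl create fargateprofile \
--cluster <cluster-name> \
--region <region> \
--name <fargate-profile-name> \
--namespace fargate-apps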
Don’t forget to create a secret to allow EKS to pull the Docker image from ECR:
kubectl create secret docker-registry regcred \
--docker-server=<account-id>.dkr.ecr.<region>.amazonaws.com \
--docker-username=AWS \
--docker-password=$(aws ecr get-login-password --region <region>) \
--namespace=fargate-apps
Make a note of the secret name (regcred above); it is referenced in the deployment file in the next step.
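You can confirm the secret exists before moving on:

kubectl get secret regcred -n fargate-apps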
Next, deploy your app with the following deployment configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  namespace: fargate-apps
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web-app
          image: <account-id>.dkr.ecr.<region>.amazonaws.com/my-repo:latest
          ports:
            - containerPort: 6060
      imagePullSecrets:
        - name: regcred
containerPort is the port your application listens on. Now apply your deployment:
kubectl apply -f deployment.yaml
Now check the logs with the command below (run kubectl get pods -n fargate-apps first to find the pod name). If your application depends on a database, you will likely see connectivity errors at this point; we’ll fix those in the next section.
kubectl logs <pod-name> -n fargate-apps
MongoDB Connectivity
If you chose a dedicated cluster in MongoDB Atlas, you can use VPC Peering or a Private Endpoint to connect to AWS. The steps below are for an M0 (free-tier) cluster, where VPC Peering and Private Endpoints are not available.
First, get the MongoDB connection string from the cluster’s Connect button; it will look something like the following, depending on your stack. Add it to your application’s environment variables.
MONGOURI=mongodb+srv://<username>:<password>@cluster0.mongodb.net/my-db?retryWrites=true&w=majority
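One way to wire this in without baking the URI into the image is a Kubernetes secret referenced from the Deployment; the secret name mongo-uri here is a placeholder:

# Store the connection string as a secret in the fargate-apps namespace.
kubectl create secret generic mongo-uri \
--from-literal=MONGOURI='mongodb+srv://<username>:<password>@cluster0.mongodb.net/my-db?retryWrites=true&w=majority' \
--namespace=fargate-apps

You can then expose it to the container via env with valueFrom.secretKeyRef in the Deployment spec, keyed on MONGOURI.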
To connect your app to MongoDB Atlas, configure a public NAT Gateway in a public subnet and whitelist the NAT Gateway’s IP address in MongoDB Atlas. The public subnet must be associated with a route table that routes traffic to an Internet Gateway. In the cluster’s Security Group, add an appropriate entry to permit traffic from the NAT Gateway on port 27017.
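A sketch of the NAT Gateway setup from the CLI, assuming placeholder IDs for the public subnet, Elastic IP allocation, and the private subnets’ route table:

# Allocate an Elastic IP and create the NAT Gateway in a public subnet.
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway \
--subnet-id <public-subnet-id> \
--allocation-id <eip-allocation-id>

# Route outbound traffic from the private subnets through the NAT Gateway.
aws ec2 create-route \
--route-table-id <private-route-table-id> \
--destination-cidr-block 0.0.0.0/0 \
--nat-gateway-id <nat-gateway-id>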
Rebuild and push the image to ECR following the steps from the beginning, then restart the pods with the command below.
kubectl rollout restart deployment <deployment-name> -n <namespace>
Setting Up an Application Load Balancer (ALB)
For external access, we’ll install the AWS Load Balancer Controller using Helm. Follow these steps:
Associate the IAM OIDC Provider for your EKS cluster:
eksctl utils associate-iam-oidc-provider --region <region> --cluster <cluster-name> --approve
Create the IAM Policy for the ALB Controller:
curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json
aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json
Create an IAM Service Account with the right permissions:
eksctl create iamserviceaccount \
--cluster <cluster-name> \
--namespace fargate-apps \
--name aws-load-balancer-controller \
--attach-policy-arn arn:aws:iam::<accountid>:policy/AWSLoadBalancerControllerIAMPolicy \
--approve
Install the Load Balancer Controller using Helm:
helm repo add eks https://aws.github.io/eks-charts
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n fargate-apps \
--set clusterName=<cluster-name> \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller \
--set region=<region> \
--set vpcId=<vpc-id>
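Verify that the controller deployment is up before moving on:

kubectl get deployment aws-load-balancer-controller -n fargate-apps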
Configuring Ingress and Service
Now, configure ingress.yaml to route traffic through your ALB:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-service
  namespace: fargate-apps
  annotations:
    alb.ingress.kubernetes.io/subnets: <your-subnets>
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/load-balancer-arn: arn:aws:elasticloadbalancing:<region>:<account-id>:loadbalancer/app/<alb-name>
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
Before applying the Ingress, create service.yaml for your app and apply it:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: fargate-apps
spec:
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 80
      targetPort: 6060
  selector:
    app: web
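Apply the Service first, then the Ingress (file names as used above):

kubectl apply -f service.yaml
kubectl apply -f ingress.yaml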
Once the Ingress is applied, your app is externally reachable via the ALB DNS name. Create listeners on the ALB to route traffic from ports 80 and 443 to the Target Groups as required. The Target Groups must be of IP address type to work with Fargate; the pod IP addresses can be found on each pod’s page in the console. Once the ALB is set up, make sure the health checks pass.
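The ALB DNS name appears in the ADDRESS column once the load balancer is provisioned:

kubectl get ingress alb-service -n fargate-apps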
Conclusion
With these steps, your application is deployed on AWS EKS using Fargate, ECR, and ALB. You’ve set up external access, connected to MongoDB, and handled all the moving parts that make cloud-native apps scalable. If you run into any issues, check the logs:
kubectl logs <pod-name> -n fargate-apps
Some steps, such as the detailed Security Group and Target Group configuration, are not covered here: they are implementation-specific and would bloat the post. To keep things simple, I’ve omitted the minor details.
Happy deploying!