Deploying Your App to AWS Kubernetes with EKS: A Step-by-Step Guide

Lexis Solutions guides you through the process of containerizing and deploying your application on an Amazon Web Services Kubernetes cluster using Elastic Kubernetes Service.


A quick overview of Docker and Kubernetes

Before deploying our app to EKS, let's first quickly go over what Docker and Kubernetes are and how they help us deploy our applications.

Docker

Containerization is a technology that lets you package your application and its dependencies into a single, portable unit called a container. Docker is the most popular containerization platform: it lets you encapsulate an application and its dependencies into a self-sufficient container, ensuring that it runs consistently across environments, from a developer's PC to a production server. Docker containers are lightweight, fast to start, and easy to share, making them ideal building blocks for modern applications. Because a container behaves the same way in any environment where Docker is installed, what you develop and test locally also works in production, eliminating the "but it was working on my machine" problem. All this speeds up development, simplifies deployment, and improves the overall reliability of applications.

Kubernetes

On the other hand, Kubernetes is an open-source container orchestration platform that automates container deployment, scaling, and management. With Kubernetes, you can define how your applications should run, ensuring high availability, fault tolerance, and efficient resource utilization.

A Kubernetes cluster comprises two fundamental components: the control plane (master node) and multiple worker nodes. The control plane oversees essential cluster-wide tasks like scheduling, scaling, and maintaining the desired state. On the other hand, the worker nodes serve as the execution engines, hosting pods—the smallest deployable units within Kubernetes. Each pod may contain one or more containers, sharing network and storage resources for efficient communication.

Some of the main features of Kubernetes include:

  • Orchestration: Kubernetes automates the deployment, scaling, and management of containerized applications, reducing manual work.
  • Load Balancing: It offers built-in load balancing for distributing network traffic to containers.
  • Auto-Scaling: Kubernetes can automatically scale the number of containers (pods) based on CPU or other metrics to handle changing workloads.
  • Self-Healing: It constantly monitors the state of your applications and automatically replaces containers if they fail.
  • Rolling Updates: Kubernetes supports rolling updates, allowing you to update your application without downtime or service interruption.
  • Resource Management: You can define and control resource limits and requests for CPU and memory, ensuring efficient resource utilization.
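To make the auto-scaling feature concrete, here is a sketch of a HorizontalPodAutoscaler manifest. This is not part of the deployment we build in this article; the name and the CPU threshold are illustrative, and the target deployment name simply matches the one we create later:

```yaml
# Sketch: scale a deployment between 2 and 5 pods based on CPU usage.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello-world-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-world-deployment
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          # Add pods when average CPU utilization exceeds 70%.
          averageUtilization: 70
```

With a manifest like this applied, Kubernetes adjusts the replica count automatically instead of you setting it by hand.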

Prerequisites

Before continuing with this article, make sure you have the following:

  • An AWS account
  • Access to the EKS (Elastic Kubernetes Service) and ECR (Elastic Container Registry) services on AWS
  • The AWS CLI tool; we'll use it to interact with AWS services
  • kubectl, the Kubernetes CLI tool
  • eksctl, a CLI tool that will simplify our work with Amazon EKS, abstracting away the complexity of the AWS CLI
  • Docker installed on your machine. We'll use it to build our app image and push it to the container registry

Deploying the app

Initial project setup

For this example, we will deploy a simple Express hello-world app. This is the initial project structure that we'll be working with:

hello-world-app
├── package.json
├── package-lock.json
└── src
    └── index.js

If we run the app, it should start on port 3000.

$ node src/index.js
App running on port 3000

And a GET request to http://localhost:3000 should return "Hello World!":

$ curl http://localhost:3000
Hello World!

Containerizing the application

Before deploying our application to Amazon EKS, we need to build a Docker image and push it to Amazon's container registry. This registry serves as the centralized repository from which EKS retrieves the image, enabling the execution of our application in the cluster.

Building the Docker image

Let's create two new files in the root directory of our project: "Dockerfile" and ".dockerignore".

hello-world-app
├── Dockerfile
├── .dockerignore
├── package.json
├── package-lock.json
└── src
    └── index.js

Dockerfile is a script used to create a Docker image. It contains a series of instructions that describe how the image should be built. Here's the content of our Dockerfile:

FROM node:16
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "src/index.js"]

Let's break it down line by line:

  • FROM node:16: This instruction specifies the base image for building this Docker image. In this case, it starts with a Node.js version 16 base image, which already includes Node.js and npm, making it suitable for Node.js applications.
  • WORKDIR /app: This instruction sets the working directory within the image to /app. This is where the rest of the commands will be executed.
  • COPY package*.json ./: Here, the COPY instruction copies the package.json and package-lock.json files from the host (the directory where the Dockerfile is located) to the /app directory within the image.
  • RUN npm install: You can use RUN to run any command during the build process. In this case, we use npm to install the dependencies specified in the package.json file.
  • COPY . .: This instruction copies all the files and directories from the host into the /app directory in the image.
  • EXPOSE 3000: The EXPOSE instruction informs Docker that the container will listen on port 3000 when it runs.
  • CMD ["node", "src/index.js"]: The CMD instruction specifies the default command to run when a container based on this image is started. In this case, it runs the Node.js application by executing node src/index.js within the container, starting our app.

The .dockerignore file tells Docker what files should be ignored during the build. In this case, we can add node_modules so it doesn't get copied into the docker image. While not required, this helps us keep the image size small and improve the build time.
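For reference, the .dockerignore for this project can be as simple as a single line; the second entry here is a common optional addition, not something the article requires:

```
node_modules
npm-debug.log
```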

Now that we've created the Dockerfile, we can build the image using the docker build command:

$ docker build -t hello-world-app .

Using the -t flag, we've specified the name for our image. The dot at the end sets the build context to the current directory, which is also where Docker looks for the Dockerfile by default. If your Dockerfile is somewhere else, or you named it differently, you can pass its path with the -f flag. Once the command has finished, you can run docker images to list the available images. The output should look something like this:

REPOSITORY        TAG      IMAGE ID       CREATED          SIZE
hello-world-app   latest   ec5cdedc65fc   26 seconds ago   861MB

Pushing the image to ECR

Now that we've built the image, it's time to create a repository in ECR and push it there.

To create the repository, run the following command:

$ aws ecr create-repository --repository-name hello-world-app

You should see output similar to this:

{
    "repository": {
        "repositoryArn": "arn:aws:ecr:eu-central-1:721145219880:repository/hello-world-app",
        "registryId": "721145219880",
        "repositoryName": "hello-world-app",
        "repositoryUri": "721145219880.dkr.ecr.eu-central-1.amazonaws.com/hello-world-app",
        "createdAt": "2023-10-05T13:26:15+03:00",
        "imageTagMutability": "MUTABLE",
        "imageScanningConfiguration": {
            "scanOnPush": false
        },
        "encryptionConfiguration": {
            "encryptionType": "AES256"
        }
    }
}

Take note of the repositoryUri value; we'll use it later to push our image to the repository and deploy it to the Kubernetes cluster. Before that, though, since this is a private repository, we need to authenticate Docker. To do this, run the following command with your region and account ID:

$ aws ecr get-login-password --region [region] | docker login --username AWS --password-stdin [aws_account_id].dkr.ecr.[region].amazonaws.com

You will get a "Login Succeeded" message in the console if it succeeds.

Now that we've authenticated Docker, we can push our image to the repository. We must tag it with a specific value using the repositoryUri: [repositoryUri]:[tag]. The tag value here can be anything: an image version, "latest", or anything else that makes sense in your case. Then, you can run docker push with the same value you tagged the image with:

$ docker tag hello-world-app 721145219880.dkr.ecr.eu-central-1.amazonaws.com/hello-world-app:latest
$ docker push 721145219880.dkr.ecr.eu-central-1.amazonaws.com/hello-world-app:latest

Once the command executes, you can confirm that the image has been pushed to ECR by running:

$ aws ecr list-images --repository-name hello-world-app
{
    "imageIds": [
        {
            "imageDigest": "sha256:511a3e2fef5790fca4ac7b03fec6dba0b5339c4acea8a74b7f1ea3fc16f5904f",
            "imageTag": "latest"
        }
    ]
}

With this, we've successfully built our image and pushed it to ECR. Now, it's time to deploy it to EKS.

Deploying the app to EKS

First, we'll need to create a new cluster in EKS:

$ eksctl create cluster --name hello-world-app --region eu-central-1

Creating a new cluster might take a while. Once the command finishes executing, you should see in the terminal that it has created a new config file. In my case, it was at ~/.kube/config. This file contains the credentials necessary for kubectl to access our cluster.

We will create two new files to deploy the app to the Kubernetes cluster: k8s/deployment.yaml and k8s/service.yaml.

hello-world-app
├── Dockerfile
├── .dockerignore
├── k8s
│   ├── deployment.yaml
│   └── service.yaml
├── package.json
├── package-lock.json
└── src
    └── index.js

The deployment.yaml file describes how the container should be run within the cluster. Here, we can specify the number of replicas, the Docker image, env variables, etc. On the other hand, the service.yaml file allows us to connect our app to a network. Let's examine the files' contents and explain them in more detail.

First the deployment.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world-container
          image: 721145219880.dkr.ecr.eu-central-1.amazonaws.com/hello-world-app:latest
          ports:
            - containerPort: 3000

  • The apiVersion and kind fields specify the kind of Kubernetes resource being defined. In this case, it's a Deployment, which is used for deploying containerized applications. Many other resource types exist, like ConfigMap for storing configuration or PersistentVolume for storage.

  • The metadata section contains metadata about the deployment.

  • The spec section contains information about the deployment itself:

    • replicas specifies the number of running instances (pods) of your application.
    • selector defines the label selector the deployment uses to identify the pods it manages.
    • template contains the pod template. Here, we've configured the container: which image to use and which port is exposed. You can also set other things here, such as env variables, CPU and memory limits, data volumes for the container, and so on.
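As a sketch of those extra settings, env variables and resource limits could be added to the container entry like this; the variable name and the limit values are illustrative, not part of the article's deployment:

```yaml
    spec:
      containers:
        - name: hello-world-container
          image: 721145219880.dkr.ecr.eu-central-1.amazonaws.com/hello-world-app:latest
          ports:
            - containerPort: 3000
          # Environment variables passed to the container.
          env:
            - name: NODE_ENV
              value: "production"
          # Scheduling hints (requests) and hard caps (limits) for resources.
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
```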

And here's the service.yaml file:

apiVersion: v1
kind: Service
metadata:
  name: hello-world-service
spec:
  selector:
    app: hello-world
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer

The service.yaml file will allow our deployment to connect to a network. In this case, the service will listen to port 80 and route the traffic to the container on port 3000.

One of the more important fields in this resource type is the type field, which decides who can access the app. There are four possible values:

  • ClusterIP - the deployment won't be publicly accessible. It can only be accessed by other apps within the Kubernetes cluster.
  • NodePort - this service type opens the specified port on every node in the cluster and routes traffic from it to the pods, exposing the deployment on each node's IP address.
  • LoadBalancer - This is another way to expose the deployment to public access. Still, it only works if you use Kubernetes with a cloud provider that supports it (AWS, in our case). It will create a load balancer service to route the traffic to our pods.
  • ExternalName - this will map our service to a DNS name. We can specify the name with the spec.externalName field.

And now, we're ready to deploy the app to the cluster by calling kubectl apply to apply the changes that we defined earlier to our cluster:

$ kubectl apply -f k8s/deployment.yaml
deployment.apps/hello-world-deployment created
$ kubectl apply -f k8s/service.yaml
service/hello-world-service created

Once this is done, we can check whether everything has been appropriately deployed by running:

$ kubectl get all
NAME                                          READY   STATUS    RESTARTS   AGE
pod/hello-world-deployment-574fbf949b-5xcrf   1/1     Running   0          4m
pod/hello-world-deployment-574fbf949b-bfghm   1/1     Running   0          4m

NAME                          TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)        AGE
service/hello-world-service   LoadBalancer   10.100.159.38   a2c68e692ec944c48a22d7a8b10aff98-2831416.eu-central-1.elb.amazonaws.com   80:30955/TCP   4m
service/kubernetes            ClusterIP      10.100.0.1      <none>                                                                    443/TCP        17m

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hello-world-deployment   2/2     2            2           4m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/hello-world-deployment-574fbf949b   2         2         2       4m

Here, we can see all resources deployed to the cluster, including our two pods and the service. For service/hello-world-service, we can see it has been assigned an external address (the load balancer's DNS name). Using that, we can test our app:

$ curl a2c68e692ec944c48a22d7a8b10aff98-2831416.eu-central-1.elb.amazonaws.com
Hello World!

As we can see, everything is working correctly. With this, we've finished our deployment.

You can find the complete project from this article here: https://github.com/lexis-solutions/kubernetes-demo.

Conclusion

This article covers the basics of Docker and Kubernetes, essential components in modern application deployment. However, it's important to note that Docker and Kubernetes are vast and intricate topics with numerous advanced features and capabilities waiting to be explored. While we've provided a solid foundation to get you started, there's much more to discover and master in these powerful technologies.
