Serving a Node.js API on Amazon Elastic Kubernetes Service and CloudFront | by Alex Colb | Mar, 2022

In this demo, we set up a Node.js Express server in an Amazon EKS Kubernetes cluster, and serve it through CloudFront

Kubernetes is a well-established, open-source container orchestration framework for deploying, scaling, and managing containerized applications. While it may be overkill for the simplest of applications, Kubernetes offers an outstanding standard of uptime and reliability, e.g. by enabling smooth, rolling updates to applications.

Elastic Kubernetes Service (EKS) is a service managed by AWS, which takes some of the infamous complexity out of managing a Kubernetes deployment. CloudFront, in turn, can be used to cache the responses coming out of the deployment so as to keep computation costs at bay.

In this demo, we’ll set up a simple Express server in a Kubernetes cluster, and serve it through CloudFront. Before we start, our local machine should have the following tooling installed:

- Docker
- kubectl
- eksctl
- the AWS CLI

We also assume we have control over a domain name, which the application can be served from. The demo application and the associated configuration files can be found here:

Preparing the Docker Image

Kubernetes is a container orchestration framework, which means that we need to build and host a Docker image for our app. We’ll start by creating a public repository on Docker Hub (e.g. my-docker-username/my-app).
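
A minimal Dockerfile for such an app might look like the following sketch. The entry point server.js and port 8080 are assumptions; adapt them to your project (the port should match what the Kubernetes configuration expects later on).

```dockerfile
# Sketch of a minimal image for a Node.js Express app (file names are illustrative)
FROM node:16-alpine

WORKDIR /usr/src/app

# Install dependencies first, so Docker can cache this layer between builds
COPY package*.json ./
RUN npm ci --only=production

# Copy the application source
COPY . .

# The port the Express server listens on (must match the Kubernetes manifests)
EXPOSE 8080

CMD ["node", "server.js"]
```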

Having defined a Dockerfile in our project root, we run:

$ docker login
$ docker build -t my-docker-username/my-app .
$ docker push my-docker-username/my-app

The Docker image we created is now publicly available on Docker Hub:

Creating the Kubernetes YAML configurations

We next create and get familiar with the following three YAML files, modifying at least the TODO-annotated values. These files tell Kubernetes what we want our cluster to look like.

api.deployment.yaml (GitHub)
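
In case the repository is unavailable, the deployment manifest might look roughly like this sketch; the names, labels, and replica count are assumptions to adapt:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: kube-system
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-docker-username/my-app # TODO: your Docker Hub image
          ports:
            - containerPort: 8080
```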

api.service.yaml (GitHub)
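
A sketch of what the service manifest might contain (the hostname annotation and port are assumptions; external-dns reads the annotation to decide which DNS record to create):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: kube-system
  annotations:
    # external-dns uses this hostname when creating the Route 53 record
    external-dns.alpha.kubernetes.io/hostname: api.example.com # TODO: your domain
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 8080
      targetPort: 8080
```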

api.external-dns.yaml (GitHub)

This is a longer, less standard configuration, so just copy it from the provided git repository. However, make sure to edit the domain in --domain-filter to match your application’s domain. The significance of this file will become clearer later on.

Creating our EKS cluster

Before continuing, we need to have our AWS credentials configured. Then, to create the cluster, we run the following command and wait a while for it to complete:

$ eksctl create cluster --name my-cluster --region eu-west-1 --nodegroup-name linux-nodes --node-type t2.small --nodes 1
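
The same cluster can alternatively be described declaratively. A sketch of an equivalent eksctl config file, applied with eksctl create cluster -f cluster.yaml, might look like this:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: eu-west-1
nodeGroups:
  - name: linux-nodes
    instanceType: t2.small
    desiredCapacity: 1
```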

We can then change our namespace and apply two of our YAML configurations as follows. Among other things, this will deploy our Docker image onto the cluster.

$ kubectl config set-context --current --namespace=kube-system
$ kubectl apply -f api.deployment.yaml
$ kubectl apply -f api.service.yaml
$ kubectl get pods --watch

The last command lets us observe our two pods being created and, hopefully, ending up in the Running state. Should you need to debug a failing pod, these commands will be useful:

$ kubectl describe pod/my-pod-name
$ kubectl logs pod/my-pod-name

If you observe an “exec format error” and your machine runs on Apple Silicon, the image was likely built for the wrong CPU architecture; you may need to build it for linux/amd64 (e.g. with docker buildx and its --platform flag) or create your Docker images elsewhere.

To check our progress so far, we can visit the ephemeral URI that our cluster has opened up to the world. In other words, after the DNS information has had some time to propagate, we can observe our API in the browser! To that end, let’s use this command to get our EXTERNAL-IP and PORT:

$ kubectl get service

Configuring DNS access to the cluster

The problem we now face is that this URI will change whenever our service is updated, so we can’t use it as-is for inbound traffic. Instead, we’ll leverage external-dns to make our service discoverable by public DNS.

First, we create a new hosted zone in AWS Route 53, taking note of its ID.

We then set up a service account authorizing our cluster to publish the application’s ephemeral URI to DNS. First, we create the following JSON policy in AWS IAM, taking note of its ARN:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/my-hosted-zone-id"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

We’re careful not to give Kubernetes access to the production-facing hosted zone. Even if we decide to use a subdomain of our actual domain, we should route the external-dns traffic to a standalone hosted zone.

We can then create the service account, and verify its attachment to our cluster:

$ eksctl utils associate-iam-oidc-provider --region=eu-west-1 --cluster=my-cluster --approve
$ eksctl create iamserviceaccount \
    --name external-dns \
    --namespace kube-system \
    --cluster my-cluster \
    --attach-policy-arn my-iam-policy-arn \
    --approve
$ kubectl describe sa external-dns

Finally, we deploy the external-dns Kubernetes pod, which in turn will dynamically update the Route 53 records to point to our ephemeral URL:

$ kubectl apply -f api.external-dns.yaml

We can verify this by visiting our new static address, i.e. the record that external-dns created in our hosted zone.

Pointing CloudFront to EKS

We now want to put our cluster behind a cache, so that API responses won’t have to be re-computed every time they are requested. In AWS CloudFront, we create a new distribution. The Origin domain should be whatever record external-dns stored in Route 53, and the value of HTTP port should be whatever was configured in the YAML files, e.g. 8080. We’ll also have to create a custom Cache Policy with Query Strings set to All, which tells CloudFront to include HTTP query strings when considering the caching of our endpoints.

Once the distribution has been deployed, we can access the application via the URI listed under Distribution domain name in the CloudFront distribution!
