How to Run Serverless Containers in AWS EKS Using Fargate
This blog discusses running serverless containers in AWS EKS with Fargate: why and when to use this configuration, and a working example of how to set it up.
Recently, a customer reached out with an interesting request: they wanted to run containers in serverless mode while still using Kubernetes features, on AWS EKS.
Side note: with standard EKS you have to manage node groups and pay for the underlying EC2 instances. After a bit of research, we discovered that we can leverage Fargate to run containers on EKS without managing any nodes.
In this blog, we’ll explore how Fargate can be used with EKS to make deploying and running your Kubernetes apps on AWS easier than ever. Before we get into how to deploy and run your apps in AWS, let’s cover a few fundamentals about the technology you’ll use.
Table of contents
- Introduction to AWS EKS
- Introduction to AWS Fargate
- Set up EKS with a Fargate profile
- Deploy ALB Ingress
- Deploy Sample App
What Is Elastic Kubernetes Service (EKS)?
Elastic Kubernetes Service is the managed Kubernetes offering from AWS. It automates administrative tasks such as deploying the Kubernetes control plane, managing upgrades, patching, and provisioning nodes. This allows customers to focus on packaging and deploying their applications to the cluster.
What Is AWS Fargate?
AWS Fargate enables customers to deploy containers without the burden of creating and managing servers. It is flexible enough to integrate with both ECS and EKS to run workloads efficiently. This approach is especially cost-effective: you pay only for the compute resources your containers actually use, which eliminates the waste of over-provisioning.
This is especially valuable for businesses planning to scale. Costs can be lowered further through Fargate Spot and Compute Savings Plans: AWS advertises savings of up to 70% for workloads tolerant to interruptions, and up to 50% for persistent workloads.
It’s worth noting that operational overhead is largely eliminated, too. AWS creates and maintains all the backend infrastructure that hosts the containers, so you don’t have to worry about keeping patches up to date – Fargate takes care of it as required. Typical infrastructure maintenance activities, such as scaling, patching, and securing your environment, all fall away.
Security is also a common concern. The good news is that each pod deployed on Fargate runs in its own isolated runtime environment and does not share underlying compute resources with other pods. For observability, Amazon CloudWatch Container Insights provides runtime metrics and logs out of the box.
Run Serverless Pods Using AWS Fargate and EKS
Something we appreciate about EKS is that it can accommodate both EC2 and Fargate in the same cluster. You can establish a serverless data plane for certain use cases while keeping standing Kubernetes worker nodes for applications that need resources in a hurry.
If this functionality appears complex, and you’re wondering how it’s at all possible to keep the two apart, this is where Fargate profiles become essential. A Fargate profile tells EKS which pods should be scheduled onto Fargate rather than onto EC2 worker nodes.
A Fargate profile declares which pods run on Fargate through its selectors. Each selector must specify a Kubernetes namespace and may optionally specify labels, and a profile can contain up to five selectors. Pods that match a selector (by namespace, and by labels if given) are then scheduled on Fargate.
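As a concrete sketch, a Fargate profile can be declared directly in an eksctl cluster config. The cluster name, region, and namespaces below are illustrative placeholders, not values from this tutorial:

```yaml
# cluster.yaml -- illustrative eksctl config; name, region, and namespaces are placeholders
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: fargate-demo     # hypothetical cluster name
  region: eu-west-1      # pick your region

fargateProfiles:
  - name: fp-default
    selectors:
      # Pods created in these namespaces are scheduled on Fargate
      - namespace: default
      - namespace: kube-system
```

A cluster built from this config (`eksctl create cluster -f cluster.yaml`) would run matching pods on Fargate; each selector can also carry a `labels:` map to narrow the match further.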
Before we dive into our tutorial, here are a few prerequisite tools you’ll need:
- AWS CLI – CLI tool for working with AWS services, including Amazon EKS
- kubectl – CLI tool for working with Kubernetes clusters
- eksctl – CLI tool for working with EKS clusters
You will also need an IAM user with programmatic access and permissions to create EKS clusters, IAM policies, and roles.
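To make the “Deploy ALB Ingress” and “Deploy Sample App” steps concrete, here is a minimal sketch of an nginx deployment exposed through an ALB. The names, replica count, and image are illustrative assumptions, and the ingress assumes the AWS Load Balancer Controller is already installed in the cluster:

```yaml
# sample-app.yaml -- illustrative manifests; names and values are assumptions
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
  namespace: default          # must match a namespace in your Fargate profile
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
  namespace: default
spec:
  type: ClusterIP             # NodePort is not supported for Fargate pods
  selector:
    app: nginx-demo
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-demo
  namespace: default
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip   # ALB targets pod IPs directly, as Fargate has no node ports
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-demo
                port:
                  number: 80
```

Applied with `kubectl apply -f sample-app.yaml`, the controller provisions an internet-facing ALB for the ingress; `target-type: ip` is the important detail for Fargate, since the pods are not backed by EC2 nodes.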
In this case we deployed a web app, so checking whether it is actually running is quite simple: browse to the URL of the load balancer that routes traffic to the application’s deployment. In the image below, you can clearly see that we can reach the nginx demo page served by a vanilla deployment of the pods.
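As a sketch of that verification step (the ingress name and namespace below are hypothetical), the load balancer’s DNS name can be read from the ingress resource:

```shell
# Hypothetical resource names; adjust to your deployment.
kubectl get ingress nginx-demo -n default
# The ADDRESS column shows the ALB DNS name; open it in a browser, or:
curl http://<alb-dns-name>/
```

The `<alb-dns-name>` placeholder stands for whatever address the ALB was assigned; it can take a minute or two to appear after the ingress is created.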
You can easily extend an EKS cluster with Fargate, so that compute resources are created on demand for your deployments. This approach is a prudent and highly efficient way of matching compute capacity and cost to your containerized workloads. Pods on Fargate also gain security, as they run in isolated environments and don’t share resources at all.
In addition, deploying serverless Kubernetes pods with AWS EKS and Fargate saves costs. Sudden capacity requirements don’t carry the expense typically associated with standing backend nodes, and test and development environments can be set up quickly and decommissioned easily, all with minimal overhead.
Co-Author: Tal Knopf, VP of Engineering