
Kubernetes is not the only one. Overview of AWS ECS

This two-part series will look at AWS ECS as an alternative to Kubernetes. In part one, I will discuss:

  1. Microservice Architecture Overview.
  2. Intro to AWS ECS.
  3. Creating an ECS Cluster via the AWS UI.
  4. Creating and deploying an example Nginx application via the AWS UI.

 

In part two, I will cover:

  1. Deploying an ECS cluster using Terraform code.
  2. Creating and deploying an application via Terraform code.
  3. Sending application logs via Fluentd to AWS Elasticsearch.

 

Microservices Architecture Overview: New Challenges for Monolithic Architecture

As an application grows, so does the amount of code, which can quickly overwhelm the development environment every time it needs to be opened and run.
Because everything must be deployed as a single unit, switching to another programming language or to other technologies becomes a big problem.

In addition, if any component stops working, the entire application goes down. Scaling a monolithic application can only be done by running another copy of the whole application; it’s extremely difficult to scale just one component.

The more extensive the application, the more important it is for developers to be able to divide it into smaller workable parts to limit these challenges. Because all modules in a monolithic application are tightly coupled, developers cannot work independently. This mutual dependency increases development time, which hurts productivity and slows deployment.

Microservices break down an extensive application into loosely coupled modules that communicate with each other via APIs. Unlike monolithic architectural applications, the use of microservices:

  1. Improves component failure isolation: Large applications can continue to run even if a single module fails.
  2. Removes commitment to a single technology stack: If you want to try a new technology stack on some service, go right ahead.
  3. Reduces dependencies: Dependency management is much simpler than in a monolithic architecture, and it is much easier to roll changes back if necessary.
  4. Adds simplicity: It’s much easier for new employees to understand a service’s functionality, and the less code in one application, the easier it is to work with.

 

Introduction to Amazon Elastic Container Service (Amazon ECS)

Amazon ECS is a highly scalable, high-performance container orchestration service. It supports Docker containers and makes it easy to run and scale containerized applications on AWS. Amazon ECS eliminates the need to install and operate your own container orchestration software: you don’t need to manage and scale clusters of VMs, and you don’t need to schedule containers onto those VMs.

Amazon ECS also brings management benefits. You can start and stop Docker applications with simple API calls and get the whole application state on demand. You can also use familiar AWS features such as IAM roles, security groups, load balancers, Amazon CloudWatch Events, AWS CloudFormation templates, and AWS CloudTrail logs.
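As a quick, hedged illustration of those API calls, the sketch below uses boto3 (the AWS SDK for Python) to list clusters and check a service’s state. The cluster and service names are placeholders, not values from this article.

```python
# Minimal sketch: inspect ECS state on demand with boto3.
# "demo-cluster" and "nginx-service" are placeholder names.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# List all ECS clusters in this account/region.
clusters = ecs.list_clusters()["clusterArns"]
print("Clusters:", clusters)

# Describe a service to see its desired/running task counts.
resp = ecs.describe_services(
    cluster="demo-cluster",          # placeholder cluster name
    services=["nginx-service"],      # placeholder service name
)
for svc in resp["services"]:
    print(svc["serviceName"], svc["desiredCount"], svc["runningCount"])
```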

Top benefits of using this solution:

  1. Out-of-the-box autoscaling (see the sketch after this list).
  2. Full integration with IAM. You can restrict task permissions using IAM roles.
  3. Full integration with load balancers.
  4. Built-in service discovery.
  5. Easy, low-overhead installation: the cluster is managed only by the ECS agent.
  6. No single point of failure.
  7. Low learning curve, which lowers the barrier to entry.
  8. Easy cluster updates via the ECS agent.
  9. Ability to create scheduled tasks.
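To make the autoscaling point concrete, here is a minimal boto3 sketch that hooks an ECS service’s desired count into Application Auto Scaling. The cluster and service names, capacities, and CPU target are illustrative assumptions, not values from this article.

```python
# Minimal sketch: register an ECS service with Application Auto Scaling
# and attach a target-tracking policy on average CPU. Names are placeholders.
import boto3

autoscaling = boto3.client("application-autoscaling", region_name="us-east-1")

# ResourceId format for ECS services: service/<cluster-name>/<service-name>
resource_id = "service/demo-cluster/nginx-service"

autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=1,
    MaxCapacity=5,
)

autoscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,  # illustrative average-CPU target (%)
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```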

 

ECS Glossary

ECS agent – a container that runs on each instance and is responsible for managing its containers (starting, stopping, restarting).

ECS task definition – a JSON file describing a container: image, resources, environment variables, ports, etc.

ECS service – defines how many copies of a task definition to run and how to run them.

ECS cluster – the group of EC2 instances on which the ECS agent is installed.
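To tie the glossary together, here is a hedged boto3 sketch that registers a minimal Nginx task definition. The family name, sizes, port mapping, and environment variable are illustrative placeholders rather than values from this article.

```python
# Minimal sketch: register a task definition for an Nginx container.
# All names, sizes, and ports are illustrative placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.register_task_definition(
    family="nginx-demo",                 # task definition name (placeholder)
    networkMode="bridge",                # network mode for the EC2 launch type
    containerDefinitions=[
        {
            "name": "nginx",
            "image": "nginx:latest",
            "cpu": 256,                  # CPU units reserved for the container
            "memory": 256,               # hard memory limit in MiB
            "essential": True,
            "portMappings": [
                {"containerPort": 80, "hostPort": 9000, "protocol": "tcp"}
            ],
            "environment": [
                {"name": "EXAMPLE_VAR", "value": "demo"}  # placeholder env var
            ],
        }
    ],
)
```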

Amazon ECS launch types

Fargate: You can use the Fargate launch type to run your containerized applications without providing and managing the underlying infrastructure. AWS Fargate is the serverless way to host your Amazon ECS workloads.

EC2: You can use the EC2 launch type to run your containerized applications on Amazon EC2 instances. You can register these to your Amazon ECS cluster and manage the infrastructure yourself.

External: You can use the External launch type to run your containerized applications on on-premises servers or virtual machines that you register to your Amazon ECS cluster and manage remotely. For more information, see External instances (Amazon ECS Anywhere).
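In practice, the launch type is just a parameter you pass when running a task. The sketch below (placeholder cluster, task definition, subnets, and security group) shows a run_task call targeting Fargate; switching to the EC2 launch type is a one-line change.

```python
# Minimal sketch: run a one-off task, choosing the launch type explicitly.
# Cluster name, task definition, subnet, and security group are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.run_task(
    cluster="demo-cluster",
    taskDefinition="fargate-demo",       # a task definition using awsvpc mode
    count=1,
    launchType="FARGATE",                # or "EC2" to use your own instances
    networkConfiguration={               # required for Fargate (awsvpc mode)
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],       # placeholder
            "securityGroups": ["sg-0123456789abcdef0"],    # placeholder
            "assignPublicIp": "ENABLED",
        }
    },
)
```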

How to Deploy the EC2 Launch Type in AWS

Today, I will show you how to quickly deploy the EC2 launch type in the AWS UI.
We will first deploy a standard Nginx manually to understand the main options. I will deploy to North Virginia because, in 99% of cases, new AWS services arrive first in North Virginia, Ireland, and Oregon. I like AWS because it lets you test an idea quickly by creating a product with a wizard.

Now our architecture looks like this.

  1. Select ECS and use the standard blue button “create a cluster”.
  2. Select cluster template.
  3. Configure the cluster. Choose the Cluster name, Provisioning Model, Number of instances, Networking, and tags. For CloudWatch Container Insights, select enable; otherwise you will only see CPU Utilization, Memory Utilization, CPU Reservation, and Memory Reservation. Click Create.
  4. Go to CloudFormation and view what resources the wizard created.
  5. View Clusters. You will now be able to see that we have the first cluster.
  6. View Instances. In this case, three instances have been created, each with the ECS agent installed. A complete list of Amazon ECS-optimized AMIs can be found here.
  7. Check the configured open ports below. The security group configuration opens inbound ports on our instances.
  8. Deploy Nginx via a Task Definition. Here, we enter the Task definition name, Network mode, Task size, and Nginx container parameters.

  9. Create a service via Services on the Cluster page. Enter the Launch type, Service name, Placement Templates, and Load balancer type, and optionally set Auto Scaling (a boto3 equivalent is sketched after this list).
  10. You did it! We have created a demo service. You can check that it works by accessing port 9000, and you can view the Docker logs on the node.
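For reference, here is a hedged boto3 sketch of roughly what steps 8 and 9 do through the UI: once a task definition is registered (as in the earlier sketch), a service is created from it. The cluster name, target group ARN, and container details are placeholders, not values from this walkthrough.

```python
# Minimal sketch: create an ECS service that keeps two copies of the
# Nginx task running behind a load balancer. All names/ARNs are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.create_service(
    cluster="demo-cluster",              # placeholder cluster name
    serviceName="nginx-service",         # placeholder service name
    taskDefinition="nginx-demo",         # family registered earlier (placeholder)
    desiredCount=2,
    launchType="EC2",
    loadBalancers=[
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:"
                              "123456789012:targetgroup/demo-tg/0123456789abcdef",
            "containerName": "nginx",
            "containerPort": 80,
        }
    ],
)
```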

 

Summary

In part one, I showed you how to create a cluster and a task definition (a template for running a Docker container) and successfully launched the service.

In part two, we go further: I will show you how to deploy an ECS cluster using Terraform code and how to send application logs via Fluentd to AWS Elasticsearch. See you next time!