
Azure Auto Scaling: Why and How?

Compute elasticity is a central concern for every company that runs containerized workloads. Fortunately, Microsoft Azure offers a variety of auto scaling options to suit a variety of workload and business needs.

In this blog, I will compare the four core scaling methods you can use to increase or decrease your computing resources in Azure Kubernetes Service (AKS): manual scaling, the cluster autoscaler, the Horizontal Pod Autoscaler (HPA), and KEDA. The comparison below will help you determine which method of scaling will best suit your needs.

1. Scaling Method: Manual

Definition: The name here is rather self-explanatory. Whereas the other three types of scaling are automated and triggered by various parameters or events, manual scaling requires an administrator to adjust the number of nodes in your AKS cluster(s) by hand.

Ideal Use Case: If you don’t have a pressing need for elasticity, and/or you want to be extremely precise in your compute spending, manual scaling is your best option.
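As a minimal sketch, manual scaling is a single Azure CLI call; the resource group, cluster, and node pool names below are hypothetical placeholders:

```shell
# Scale the node pool "nodepool1" of the AKS cluster "myAKSCluster" to 3 nodes.
# All names here are placeholders -- substitute your own.
az aks scale \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --nodepool-name nodepool1 \
  --node-count 3
```

The node count stays at exactly 3 until you run the command again, which is what makes this method predictable in cost but inflexible under load.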

2. Scaling Method: Cluster Autoscaler

Definition: This method scales at the node level, automatically adding nodes when pods cannot be scheduled due to insufficient resources and removing nodes that have been underutilized for a sustained period.

Ideal Use Case: If your workloads fluctuate significantly, this automated scaling method ensures that you won’t run up against capacity limits and that you won’t pay for idle nodes.
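A minimal sketch of enabling the cluster autoscaler on an existing cluster via the Azure CLI, again with placeholder names and illustrative bounds:

```shell
# Enable the cluster autoscaler and let the default node pool
# float between 1 and 5 nodes based on pod scheduling pressure.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5
```

The min/max bounds act as a cost guardrail: the autoscaler reacts to pending pods, but it will never grow the pool past the ceiling you set.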

3. Scaling Method: HPA (Horizontal Pod Autoscaler)

Definition: This horizontal scaling method manages the number of pod replicas in a workload. The HPA is a built-in Kubernetes controller that compares observed utilization metrics (such as CPU or memory) against the targets you define and adjusts the replica count accordingly.

Ideal Use Case: HPA is a fantastic tool if you want to ensure that critical applications are elastic and can scale out to meet increasing demand and scale down to ensure optimal resource usage.
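As an illustration, here is a minimal HPA manifest that keeps a hypothetical `my-app` Deployment between 2 and 10 replicas, targeting 70% average CPU utilization (all names are placeholders):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app          # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Apply it with `kubectl apply -f hpa.yaml`; the controller then adds or removes replicas to hold average CPU near the target.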

4. Scaling Method: KEDA (Kubernetes Event-Driven Autoscaling)

Definition: KEDA drives the scaling of any Kubernetes container based on the number of events waiting to be processed. KEDA supports a wide range of event sources, among them Azure Event Hubs, Azure Pipelines, Log Analytics, Azure Monitor, Azure Service Bus, and many other public cloud services.

Ideal Use Case: KEDA is ideal if you are employing other standard Kubernetes components (like the HPA) because it extends their functionality without overwriting or duplicating it. You can set scaling parameters for multiple Kubernetes applications and frameworks.
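For illustration, a minimal KEDA ScaledObject that scales a hypothetical `my-worker` Deployment on the length of an Azure Service Bus queue (queue, workload, and authentication names are placeholders):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-worker-scaler
spec:
  scaleTargetRef:
    name: my-worker            # hypothetical Deployment to scale
  minReplicaCount: 0           # KEDA can scale all the way to zero
  maxReplicaCount: 20
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: orders      # placeholder queue name
        messageCount: "5"      # target messages per replica
      authenticationRef:
        name: servicebus-auth  # a TriggerAuthentication holding the connection credentials
```

Under the hood, KEDA creates and manages an HPA for the target workload, which is why it composes with the HPA rather than replacing it.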


Want to know how to deploy each method? Check out this webinar recording: