Azure Kubernetes Service (AKS) continues to evolve, offering more efficient ways to manage resources in your Kubernetes clusters. The introduction of AKS 1.29 marks a significant shift in how memory reservations are handled, impacting how you plan and deploy your clusters. This post delves into these changes and what they mean for your AKS environment.
The Discrepancy Between Total and Allocatable Resources
In AKS, a portion of node resources is reserved to support the node’s function within the cluster. This reservation creates a difference between the node’s total resources and what is available for user-deployed pods, known as allocatable resources.
Finding Allocatable Resources
To determine a node’s allocatable resources, use the following command:
kubectl describe node <nodename>
This command provides detailed information about the node’s resources, including those reserved by AKS.
Understanding Resource Reservations
Resource reservations in AKS are critical to maintaining node health and performance, with specific allocations for both CPU and memory.
Reserved CPU
CPU reservations depend on the node type and configuration. They follow a structured scale based on the number of CPU cores. Here’s a quick look at how CPU reservations scale with the number of CPU cores:
CPU cores on host | Kube-reserved (millicores)
1 | 60
2 | 100
4 | 140
8 | 180
16 | 260
32 | 420
64 | 740
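For capacity planning, the published tiers can be turned into a small lookup helper. This is a sketch of my own, not an AKS API; how AKS treats core counts between the listed tiers is an assumption here (rounded down to the nearest published tier):

```python
# Kube-reserved CPU tiers from the table above (cores -> millicores).
KUBE_RESERVED_CPU = {1: 60, 2: 100, 4: 140, 8: 180, 16: 260, 32: 420, 64: 740}

def reserved_cpu_millicores(cores: int) -> int:
    """Return the kube-reserved CPU for a node, using the largest
    published tier that does not exceed the node's core count."""
    eligible = [c for c in KUBE_RESERVED_CPU if c <= cores]
    if not eligible:
        raise ValueError("node must have at least 1 CPU core")
    return KUBE_RESERVED_CPU[max(eligible)]

print(reserved_cpu_millicores(8))  # 180
```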
Reserved Memory
Memory reservations in AKS are a bit more complex and have undergone changes in AKS 1.29 and later versions.
AKS Versions Prior to 1.29
- Memory Eviction Rule: A memory.available<750Mi eviction rule was the default, keeping at least 750Mi of memory in reserve. When available memory on the node falls below this threshold, the kubelet evicts a running pod to free up memory.
- Regressive Memory Reservation: To allow the kubelet daemon to function properly, a tiered system for memory reservation was used, based on the total memory available:
- 25% of the first 4GB of memory
- 20% of the next 4GB of memory (up to 8GB)
- 10% of the next 8GB of memory (up to 16GB)
- 6% of the next 112GB of memory (up to 128GB)
- 2% of any memory above 128GB
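The tiered scheme above is easy to express in code. The following sketch (the function name is mine) walks the tiers and sums the reserved memory in GB:

```python
def regressive_kube_reserved_gb(total_gb: float) -> float:
    """Kube-reserved memory for AKS versions before 1.29, computed
    from the tiered percentages: 25% of the first 4GB, 20% of the
    next 4GB, 10% of the next 8GB, 6% of the next 112GB, 2% beyond."""
    tiers = [(4, 0.25), (4, 0.20), (8, 0.10), (112, 0.06)]  # (GB span, rate)
    reserved, remaining = 0.0, total_gb
    for span, rate in tiers:
        chunk = min(remaining, span)
        reserved += chunk * rate
        remaining -= chunk
    reserved += remaining * 0.02  # anything above 128GB
    return reserved

print(round(regressive_kube_reserved_gb(16), 2))  # 2.6
```

On a 16GB node this reserves 2.6GB, which is why the newer model discussed below can free up a substantial amount of memory.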
AKS 1.29 and Later
- Memory Eviction Rule: AKS now applies a memory.available<100Mi eviction rule by default, so the kubelet evicts a running pod whenever available memory on the node drops below 100Mi.
- Memory Reservation Rate: Memory reservations are set to the lesser of 20MB per maximum pod supported on the node plus 50MB, or 25% of the total system memory.
Example Calculation for AKS 1.29
Let’s consider a VM with 16GB of memory, supporting up to 50 pods:
- Maximum Pod-Based Reservation: 20MB * 50 Max Pods + 50MB = 1050MB
- Percentage-Based Reservation: 25% * 16GB = 4096MB
Since the lesser of the two is 1050MB:
- Total kube-reserved Memory: 1050MB
- Allocatable Memory: 16GB – 1.05GB (kube-reserved) – 0.1GB (eviction threshold) ≈ 14.85GB
This example illustrates how the new reservation model can lead to more efficient memory usage compared to previous versions.
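The same arithmetic can be checked in a few lines of Python. The helper names below are mine, not an AKS API; since the post mixes GB and MB, the calculation here is done consistently in MB:

```python
EVICTION_THRESHOLD_MB = 100  # memory.available<100Mi default in AKS 1.29+

def aks_kube_reserved_mb(total_mb: float, max_pods: int) -> float:
    """Kube-reserved memory for AKS 1.29+: the lesser of the
    pod-based and percentage-based reservations."""
    pod_based = 20 * max_pods + 50      # 20MB per max pod + 50MB
    percentage_based = 0.25 * total_mb  # 25% of total system memory
    return min(pod_based, percentage_based)

total_mb = 16 * 1024  # 16GB node
reserved = aks_kube_reserved_mb(total_mb, max_pods=50)
allocatable_mb = total_mb - reserved - EVICTION_THRESHOLD_MB
print(reserved)        # 1050
print(allocatable_mb)  # 15234
```

15234MB is roughly 14.88GB; the ≈14.85GB figure above comes from rounding 16GB - 1.05GB - 0.1GB in decimal units, so the two agree apart from GB/GiB rounding.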
Impact on Nodes
These resource reservations have several implications:
- They keep agent nodes healthy, ensuring the smooth operation of system pods critical to cluster health.
- Nodes will report less allocatable memory and CPU than if they were not part of a Kubernetes cluster.
- These reservations are set by AKS and cannot be changed by the user.
Additional Considerations
Besides Kubernetes itself, the underlying node OS also reserves CPU and memory to maintain its functions. It’s essential to factor in these additional reservations when planning your AKS deployment.
Conclusion
The shift in AKS’s resource reservation strategy, particularly in version 1.29 and later, provides a more efficient and streamlined approach to managing node resources. Understanding these changes allows you to better plan and optimize your Kubernetes clusters for performance and scalability.