
Azure Kubernetes CPU multithreading


I'm afraid there is no easy answer to your question. Planning the right size of VM node pools for a Kubernetes cluster, so that it properly fits your workload's resource-consumption requirements, is a constant effort for cluster operators and requires you to take many factors into account. Let me mention a few of them:

  1. What Quality of Service (QoS) class (Guaranteed, Burstable, BestEffort) should I specify for my application Pods, and how many of them do I plan to run?

  2. Do I really know the actual CPU/memory usage of my app, versus how much of the VM's compute resources stays idle? (Is there any on-prem monitoring solution in place right now that could show this, or that could easily be replaced by an in-cluster Kubernetes one?)

  3. Is my cluster a multi-tenant environment, where I need to share cluster resources with different teams?

  4. Node (VM) capacity is not the same as the total resources available to workloads.

You should think here in terms of cluster Allocatable resources:

Allocatable = Node Capacity - kube-reserved - system-reserved

In the case of the Standard_D16ds_v4 VM size in Azure, you would have roughly 14 CPU cores at your workloads' disposal, not the 16 assumed earlier.
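You can see this on a running cluster by comparing a node's capacity with its allocatable resources. Below is an illustrative excerpt of a node object (the exact reserved amounts depend on the AKS version and VM size, so treat the numbers as examples only):

    # Illustrative excerpt of `kubectl get node <node-name> -o yaml`
    # (values are examples; actual kube-reserved/system-reserved differ per cluster)
    status:
      capacity:
        cpu: "16"        # raw capacity of a Standard_D16ds_v4 node
      allocatable:
        cpu: "14"        # what is left for Pods, per the estimate above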

I hope you are aware that the number of CPUs specified through args:

    args:
    - -cpus
    - "2"

is an app-specific approach (in this case, arguments understood by the 'stress' utility written in Go), not a general Kubernetes way to spawn a declared number of threads per CPU.
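What you can declare in a generic way are CPU and memory requests and limits on the container itself. Below is a minimal sketch (the image name and resource values are placeholders for your Spring Boot app, not recommendations); since requests equal limits, this Pod would also get the Guaranteed QoS class mentioned in point 1 above:

    apiVersion: v1
    kind: Pod
    metadata:
      name: springboot-app                                  # hypothetical name
    spec:
      containers:
      - name: app
        image: myregistry.azurecr.io/springboot-app:1.0     # placeholder image
        resources:
          requests:
            cpu: "2"        # what the scheduler reserves for this container
            memory: 2Gi
          limits:
            cpu: "2"        # hard cap, enforced via the cgroup CPU bandwidth controller
            memory: 2Gi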

My suggestion:

To avoid over-provisioning or under-provisioning cluster resources for your workload (requested resources vs. actually utilized resources), and to optimize the cost and performance of your applications, I would, in your place, do a preliminary sizing estimation of the VM node pool size and type required by your Spring Boot multithreaded app, and first familiarize yourself with concepts like bin-packing and application right-sizing. For these last two topics I don't know a better public guide than the one recently published by the GCP tech team:

"Monitoring gke-clusters for cost optimization using cloud monitoring"

I would encourage you to find the answer to your question yourself. Do a proof of concept on GKE first (with the free trial), replace the demo app in the guide above with your own workload, then come back here and share your observations; they would be valuable for others with a similar task too!


First of all, please note that Kubernetes CPU is an absolute unit:

Limits and requests for CPU resources are measured in cpu units. One cpu, in Kubernetes, is equivalent to 1 vCPU/Core for cloud providers and 1 hyperthread on bare-metal Intel processors.

CPU is always requested as an absolute quantity, never as a relative quantity; 0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine.

In other words, a CPU value of 1 corresponds to using a single core continuously over time.
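CPU can be written as a decimal or in millicpu form; both mean the same absolute amount of CPU time on any machine. For example:

    # 500m (millicpu) is exactly the same request as 0.5 CPU,
    # i.e. half a core's worth of CPU time, regardless of how many cores the node has
    resources:
      requests:
        cpu: 500m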

The value of resources.requests.cpu is used during scheduling and ensures that the sum of all requests on a single node is less than the node capacity.

When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for Pods. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled Containers is less than the capacity of the node. Note that although actual memory or CPU resource usage on nodes is very low, the scheduler still refuses to place a Pod on a node if the capacity check fails. This protects against a resource shortage on a node when resource usage later increases, for example, during a daily peak in request rate.
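To make that concrete with the numbers from earlier: on a node with 14 allocatable CPUs, the scheduler will keep placing Pods as long as the sum of their CPU requests stays within 14 (for example, seven Pods each requesting "2"); an eighth such Pod would remain Pending even if the already-scheduled Pods were sitting mostly idle.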

The value of resources.limits.cpu is used to determine how much CPU can be used, given that it is available; see How Pods with resource limits are run.

The spec.containers[].resources.limits.cpu is converted to its millicore value and multiplied by 100. The resulting value is the total amount of CPU time in microseconds that a container can use every 100ms. A container cannot use more than its share of CPU time during this interval.

In other words, the request is what the container is guaranteed in terms of CPU time, and the limit is what it can use, provided it is not being used by someone else.

The concept of multithreading does not change the above; the requests and limits apply to the container as a whole, regardless of how many threads run inside. The Linux scheduler makes scheduling decisions based on waiting time, and with containers, cgroups are used to limit the CPU bandwidth. Please see this answer for a detailed walkthrough: https://stackoverflow.com/a/61856689/7146596
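As a worked example of that conversion (a sketch assuming the default 100 ms CFS period of cgroup v1), a limit of 2 CPUs becomes a budget of 200 ms of CPU time per 100 ms period, and all threads in the container share that budget:

    resources:
      limits:
        cpu: "2"                         # 2000 millicores
    # Roughly what ends up in the container's cgroup (illustrative values):
    #   cpu.cfs_period_us = 100000       # 100 ms accounting period
    #   cpu.cfs_quota_us  = 200000       # 2000m * 100 -> 200 ms of CPU time per period
    # However many threads the app starts, together they cannot exceed this quota;
    # once it is exhausted within a period, the container is throttled until the next period.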

To finally answer the question

Your on-premises VM has 4 cores operating at 2.5 GHz, and if we assume that CPU capacity is a function of clock speed and the number of cores, you currently have 10 GHz "available" (4 × 2.5 GHz).

The CPUs used in Standard_D16ds_v4 have a base speed of 2.5 GHz and can run at up to 3.4 GHz for shorter periods, according to the documentation:

The D v4 and Dd v4 virtual machines are based on a custom Intel® Xeon® Platinum 8272CL processor, which runs at a base speed of 2.5Ghz and can achieve up to 3.4Ghz all core turbo frequency.

Based on this, specifying 4 cores should be enough to give you the same capacity as on-premises.
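As a starting point, you could therefore request the equivalent of your on-premises capacity and adjust from there; the values below are a sketch, not a recommendation (the memory figure in particular is a placeholder you would need to size for your app):

    resources:
      requests:
        cpu: "4"         # matches the 4 cores of the on-premises VM
        memory: 8Gi      # placeholder; set to your app's real footprint
      limits:
        cpu: "4"
        memory: 8Gi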

However, the number of cores and clock speed are not everything (caches etc. also impact performance), so to optimize the CPU requests and limits you may have to do some testing and fine-tuning.