VMworld 2017 Roundup: Day 4

VMworld “hump day” is in the bag. This is the day roughly in the middle of VMworld, distinguished by the lack of a General Session in the morning and by the Customer Appreciation event in the evening. Let’s recap.
vSphere 6.5 Host Resources Deep Dive: Part 2 [SER1872BU]
Last year was part one of Frank Denneman and Niels Hagoort’s vSphere 6.5 Host Resources Deep Dive. This year they focused on CPU-related concerns.
First up, Frank focused on non-uniform memory access, or NUMA for short. He pointed out that the best user experience comes from providing consistent performance, and one way to achieve that is by reducing “moving parts” (complexity). Also, when troubleshooting performance in a virtual environment, don’t focus only on the affected VM(s): other VMs or components that share resources and/or hosts could be contributing, or could be affected themselves.
There’s an advanced setting, Numa.LocalityWeightActionAffinity, that you can consider if it turns out that CPU assignment isn’t balanced well across NUMA boundaries for your particular workload. The default is 130 (130 of what, Frank hasn’t figured out yet). Setting it to zero prevents NUMA load balancing. This is a per-host setting, so it would have to be set on every host in the cluster.
Note that, as an advanced setting, this shouldn’t be adjusted unless you’ve determined that you definitely have a performance concern that changing it would improve. The same is true of any advanced setting: changing one from its default should only be done after serious consideration and with good reason, otherwise those wouldn’t be the default values!
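If, after all that, you still decide to change it, the host option can be applied programmatically. Here’s a minimal pyVmomi sketch (mine, not anything shown in the session); the vCenter address, credentials and scoping are placeholders, and in practice you’d limit the loop to the hosts in the cluster in question.

```python
# Hypothetical pyVmomi sketch: set the host advanced option
# Numa.LocalityWeightActionAffinity to 0 on each host.
# The vCenter address, credentials, and scoping are placeholders.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='********',
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    # Grab every host in the inventory; in practice, scope this to the
    # cluster you actually care about.
    host_view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in host_view.view:
        opt_mgr = host.configManager.advancedOption
        # Depending on the option's declared type, the integer may need to
        # be wrapped (e.g. as a long) before the server accepts it.
        opt_mgr.UpdateOptions(changedValue=[
            vim.option.OptionValue(key='Numa.LocalityWeightActionAffinity',
                                   value=0)])
finally:
    Disconnect(si)
```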
Frank pointed out that when you read PCPU in VMware’s documentation, it refers to either a physical processor core or a hyper-thread. There’s no guarantee of one or the other out of the box, as ESXi decides how to schedule CPU time. If a vCPU is scheduled to run on a physical core, it’s considered to be charged for 100% of its CPU time. If a vCPU is scheduled to run on a hyper-threading “core”, it’s considered to be charged for 62.5% of its CPU time, meaning that it’s immediately put back into the scheduling queue to satisfy the outstanding 37.5%. This is to make sure that a vCPU scheduled on HT ultimately gets the same CPU time as a vCPU scheduled on a core; however, being scheduled multiple times does impact performance.
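As a back-of-the-envelope illustration of those numbers (my own toy model, not anything from the session or from VMware), a vCPU on a full core is fully charged in a single pass, while a vCPU on a hyper-thread needs a second trip through the queue to cover the remaining 37.5%:

```python
# Toy illustration of the charging model described above; not VMware code.
def passes_to_fully_charge(charge_per_pass, demand=1.0):
    """Scheduling passes needed before the vCPU's demand is fully charged."""
    passes = 0
    remaining = demand
    while remaining > 1e-9:
        remaining -= min(remaining, charge_per_pass)
        passes += 1
    return passes

print(passes_to_fully_charge(1.0))    # full physical core: 1 pass
print(passes_to_fully_charge(0.625))  # HT "core": 2 passes (62.5% + 37.5%)
```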
The NUMA scheduler will balance workloads across NUMA nodes for VMs whose vCPU count fits within the NUMA boundary. For large VMs (from a vCPU point of view), try to mirror the physical NUMA domains for absolute best performance. The NUMA scheduler also uses the CPU core count for client sizing. There is a per-VM advanced setting, numa.vcpu.preferHT, that, when set to TRUE, makes the NUMA scheduler consider hyper-threads in addition to cores. This can keep a VM within a NUMA boundary but, like all advanced settings, you should have a good reason to change this behaviour for a particular VM. It may be good for VMs that benefit from maintaining memory locality, but not for VMs with high CPU load. See VMware KB 2003582 for more info.
The NUMA client always considers CPU cores, not memory locality. The advanced setting numa.consolidate, when set to FALSE, will help distribute based on memory. Did we mention being careful with advanced settings? Pretty sure we did.
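For completeness, here’s a hedged pyVmomi sketch of how those per-VM settings could be applied via extraConfig. Again, this is mine rather than anything shown in the session; the connection details and VM name are placeholders, and the values are examples, not recommendations.

```python
# Hypothetical pyVmomi sketch: apply per-VM NUMA advanced settings via
# extraConfig. The connection details, VM name, and values are placeholders.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='********',
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    vm_view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in vm_view.view if v.name == 'big-numa-vm')  # placeholder
    spec = vim.vm.ConfigSpec(extraConfig=[
        vim.option.OptionValue(key='numa.vcpu.preferHT', value='TRUE'),
        vim.option.OptionValue(key='numa.consolidate', value='FALSE'),
    ])
    vm.ReconfigVM_Task(spec=spec)
    # These options are read at power-on, so a power cycle is typically
    # needed before they take effect.
finally:
    Disconnect(si)
```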
Frank also covered the impact of CPU sizing on storage performance, and how the choice of storage for particular use cases (such as the vSAN caching tier versus the capacity tier) needs to be considered alongside your CPU to ensure best performance. Hint: Intel Optane with 3D XPoint makes for an excellent vSAN caching tier, but is wasted on the capacity tier.
Niels then discussed the impact of CPU utilization on network latency. Traditionally, NICs rely on interrupts to have packets picked up by the CPU and put into memory. If the CPU is heavily utilized, interrupt handling can be delayed, which in turn introduces latency on the local NIC. A newer technology, vRDMA, aims to address this by allowing NICs to access memory directly, bypassing the ESXi kernel entirely. This isn’t broadly available yet as it requires newer hardware (or at least updated firmware) and, naturally, vendor support.
Oh, and, if you haven’t already, don’t forget to pick up the book!