Hi All ...
In this fourth part, we'll examine advanced VM configuration settings such as RDM, NPIV and NUMA, as well as some troubleshooting tips.
Credits:
- Mohammed Al-Baqari
- Cormac Hogan
- Frank Denneman
Now, let's begin...
1. Enabling Copy-Paste in the VM Console:
A KB article by VMware about how to enable copy-paste operations between the VM console and the local client machine:
VMware KB: Clipboard Copy and Paste does not work in vSphere Client 4.1 and later
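In short, and as a minimal sketch based on that KB (double-check the exact keys against the KB for your version), copy-paste can be re-enabled per VM by adding the following parameters (VM Edit Settings-> Options-> General-> Configuration Parameters, with the VM powered off):
isolation.tools.copy.disable = "FALSE"
isolation.tools.paste.disable = "FALSE"
The KB also describes how to enable it for all VMs on a host.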
2. For Error “Unable to connect to the MKS: Failed to connect to server xxx.xxx.xxx.xxx:902”:
It's caused either by a firewall/ports issue, in which case TCP port 902 must be opened between the vSphere Client machine and the ESXi hosts, or by a DNS problem where the vSphere Client machine can't resolve the host names of the ESXi hosts.
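As a quick check from the vSphere Client machine (the host name below is just a placeholder, and the telnet client must be installed on Windows):
nslookup esxi01.yourdomain.local     (verifies the ESXi host name resolves)
telnet esxi01.yourdomain.local 902     (verifies TCP port 902 is reachable)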
3. Raw Device Mapping (RDM):
RDM presents an entire LUN to a VM through the hypervisor so the guest deals with it directly like a local disk, which gives near-native performance. It's used for very large disks or for high-performance applications.
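As an illustration only (the device identifier and paths are placeholders), an RDM mapping file can also be created manually from the ESXi shell with vmkfstools, although you'd normally add it through the vSphere Client (Edit Settings-> Add-> Hard Disk-> Raw Device Mappings):
vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/Datastore1/VM1/VM1_rdm.vmdk     (Physical compatibility mode)
vmkfstools -r /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/Datastore1/VM1/VM1_rdm.vmdk     (Virtual compatibility mode)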
The following blog post by Mohammed Al-Baqari explains RDM and takes a deep dive into it:
VMware Technologies Blog: Raw Device Mapping (RDM) DeepDive
Another article, by Cormac Hogan on the official VMware blog, about RDM and migration:
Migrating RDMs, and a question for RDM Users. | VMware vSphere Blog - VMware Blogs
The following official articles compare the two RDM compatibility modes (Physical Mode/Virtual Mode):
VMware KB: Difference between Physical compatibility RDMs and Virtual compatibility RDMs
vSphere 5.5 Documentation Center - RDM Virtual and Physical Compatibility Modes
The last article, from VMware, is about the limitations of RDM:
vSphere 5.5 Documentation Center - RDM Considerations and Limitations
4. Virtual Non-Uniform Memory Access (vNUMA):
Non-Uniform Memory Access (NUMA) means that on a physical machine with multiple physical CPUs, each physical CPU has a certain amount of RAM locally attached to it (direct access). A CPU can also reach RAM attached to another physical CPU, but with a performance penalty and higher latency.
In virtualized environments, and for monster VMs (by default, VMs with more than 8 vCPUs), the CPU scheduler tries to reproduce the physical NUMA layout at the VM level: it keeps all the vCPUs of a VM that belong to the same virtual CPU socket (vSocket) on the same physical CPU, so they benefit from its cache and its locally attached RAM, and it places their memory in that closer RAM to lower latency.
Keep in mind the following:
1-) Don't configure more virtual cores per virtual socket than the number of physical cores in a single physical CPU (not counting Hyper-Threading), to prevent splitting the VM across two NUMA nodes, which hurts performance.
2-) Don't configure more virtual sockets per VM than the actual number of physical CPU sockets.
3-) Don't configure more virtual RAM than the amount of RAM in a single NUMA node, again to prevent splitting the VM across two NUMA nodes (see the sizing sketch below).
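A quick sizing sketch (the numbers are just an assumed example): on a host with 2 physical CPUs of 8 cores each and 128 GB RAM (64 GB per NUMA node), a 16-vCPU VM respects these rules when configured as 2 virtual sockets x 8 cores with at most 64 GB of vRAM. In the .vmx this corresponds to something like:
numvcpus = "16"
cpuid.coresPerSocket = "8"
memSize = "65536"     (memory in MB, i.e. 64 GB)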
5. Applying NUMA and Virtual NUMA (vNUMA) Advanced Settings:
1-) NUMA Settings:
To apply advanced settings of NUMA configuration on any host:
Host-> Configuration-> Advanced Settings
There are many advanced settings available; they're summarized in the following official article by VMware:
vSphere 5.5 Documentation Center - Advanced NUMA Attributes
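For example (the attribute name is taken from that article and the value shown is just its documented default, so treat this as a sketch), a host NUMA attribute can also be read and changed from the ESXi shell with esxcli:
esxcli system settings advanced list -o /Numa/RebalancePeriod
esxcli system settings advanced set -o /Numa/RebalancePeriod -i 2000     (rebalance period in milliseconds)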
2-) vNUMA Settings:
vNUMA settings are changed per VM, on each VM running on the host. To change any of them, the VM must be powered off:
VM Edit Settings-> Options-> General -> Configuration Parameters
Then, add the setting you want to change and its desired value, chosen from those listed in the following article:
vSphere 5.5 Documentation Center - Advanced Virtual NUMA Attributes
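For instance (the values here are only an assumed example; the attribute names come from the article above), to expose vNUMA to smaller VMs and cap the size of a virtual NUMA node, you could add:
numa.vcpu.min = "4"     (expose vNUMA to VMs with 4 or more vCPUs instead of the default 9)
numa.vcpu.maxPerVirtualNode = "8"     (maximum number of vCPUs per virtual NUMA node)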
6. N-Port ID Virtualization (NPIV):
It's used to zone VMs directly through the FC fabric to the FC storage. On a vSphere host, the HBA port is an N_Port (end port), behind which many virtual HBA ports from different VMs are zoned to the FC storage through the FC fabric using their "virtual" WWPNs.
So the use cases are limited, and it requires configuration on both the FC fabric and the VMs; in addition, the VMs must use RDM disks.
For detailed information, check the following official article on VMware Blogs by Cormac Hogan:
NPIV: N-Port ID Virtualization | VMware vSphere Blog - VMware Blogs
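When NPIV is enabled for a VM (VM Edit Settings-> Options-> Fibre Channel NPIV), the assigned WWNs are written into the VM's .vmx file. The entries below are only a rough, assumed illustration with dummy values; check the .vmx generated for your own VM for the exact keys:
wwn.type = "vc"
wwn.node = "28230000xxxxxxxx"
wwn.port = "28230000xxxxxxxx"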
7. Monitor Execution Modes:
Every VM has a small software layer that abstracts the physical hardware from it; this layer is called the Virtual Machine Monitor (VMM). The VMM is part of the VMkernel and is responsible for virtualizing the physical CPU and memory (the CPU instruction set and the Memory Management Unit (MMU), respectively). To do that, there are three execution modes (Binary Translation mode, Hardware mode, and Hardware mode with hardware MMU). All of these modes virtualize CPU and memory using software virtualization, hardware virtualization or para-virtualization. The following table is a small comparison between Software Virtualization mode and Hardware Virtualization mode:
Software Virtualization Mode | Hardware Virtualization Mode
Used when the physical hardware doesn't support virtualization of the CPU and MMU. | Used when the physical hardware supports virtualization of the CPU and MMU.
Runs the VMM in Ring 0 of the physical CPU, fooling the Guest OS by performing binary translation of its privileged calls. | Runs the VMM at Root level of the physical CPU and the Guest OS in Ring 0.
Software virtualization of the MMU uses shadow page tables, which is slower and adds CPU overhead. | Hardware virtualization of the MMU uses AMD Rapid Virtualization Indexing (AMD RVI) or Intel Extended Page Tables (Intel EPT), which performs better.
Slower than Hardware Execution mode. | Faster than Software Execution mode.
The following table also compares the three VMM execution modes:
| BT Mode | HV Mode | HVMMU Mode
MMU Virtualization | Shadow Page Tables (software) | Shadow Page Tables (software) | AMD-V with RVI or Intel VT-x with EPT (hardware)
CPU Virtualization | Binary translation of calls (software) | AMD-V or Intel VT-x (hardware) | AMD-V or Intel VT-x (hardware)
Performance | Worst | Better | Best
Finally, to change the VMM execution mode, check the following KB article from VMware:
VMware KB: Changing the virtual machine monitor mode
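According to that KB, the mode can be forced per VM through two configuration parameters (with the VM powered off; "automatic" is also an accepted value). A small sketch:
monitor.virtual_exec = "hardware"     (CPU virtualization: automatic, hardware or software)
monitor.virtual_mmu = "hardware"     (MMU virtualization: automatic, hardware or software)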
8. CPU Affinity Rules:
This advanced option is used to limit processing of a VM’s vCPUs to certain physical core(s). It does not dedicate that CPU to that virtual machine only and therefore does not restrict the CPU scheduler from using that CPU for other virtual machines.
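Affinity is set per VM under VM Edit Settings-> Resources-> Advanced CPU-> Scheduling Affinity; the equivalent .vmx entry looks like the line below (the core numbers are just an example):
sched.cpu.affinity = "4,5"     (restricts the VM's vCPUs to physical CPUs 4 and 5)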
For detailed information, check the following article by Frank Denneman about this feature:
Beating a dead horse - using CPU affinity - frankdenneman.nl
Share the knowledge ...
Previous: vSphere 5.x Notes & Tips - Part III: