- Tune Virtual Machine memory configurations
- Tune Virtual Machine networking configurations
- Tune Virtual Machine CPU configurations
- Tune Virtual Machine storage configurations
- Calculate available resources
- Properly size a Virtual Machine based on application workload
- Modify large memory page settings
- Understand appropriate use cases for CPU affinity
- Configure alternate virtual machine swap locations
Tune Virtual Machine memory configurations
vSphere Virtual Machine Administration, Chapter 8 “Configuring Virtual Machines”, Section “Virtual Machine Memory Configuration”, page 104.
- Changing the configuration can be done with the vSphere Client or vSphere Web Client.
- The maximum amount of virtual machine memory depends on the virtual machine hardware version.
- Know about Limits, Reservations and Shares (a VCP5 should…)
- Memory Hot Add feature: add memory while the VM is powered on. Note that enabling the feature must be done while the VM is powered off…
- Associate memory allocation with a NUMA node. If NUMA is available, see the Resources and Memory section.
- Change the Swap File Location.
Tune Virtual Machine networking configurations
vSphere Virtual Machine Administration, Chapter 8 “Configuring Virtual Machines”, Section “Network Virtual Machine Configuration”, page 111.
- Know about Network Adapter Types;
- Know how to configure and adjust the MAC address and Connection status;
- Why you should choose the VMXNET3 adapter; see the linked post.
Tune Virtual Machine CPU configurations
vSphere Virtual Machine Administration, Chapter 8 “Configuring Virtual Machines”, Section “Virtual CPU Configuration”, page 92.
- Know the terminology: CPU, CPU socket, core, corelets, threads and so on;
- Configuring multicore virtual CPUs. There are some limitations and considerations on this subject, such as the ESXi host configuration, the VMware license, and guest OS (license) restrictions. You can decide the number of virtual sockets and the number of cores per socket.
- CPU Hot Plug settings. Same story as the Memory Hot Add feature.
- Allocate CPU resources.
- Configuring Advanced CPU Scheduling Settings, which means configuring Hyperthreaded Core Sharing and Processor Scheduling Affinity.
- Change CPU Identification Mask Settings.
- Hide the NX/XD flag from guest: increases vMotion compatibility between hosts, but might disable certain CPU security features.
- Expose the NX/XD flag to guest: keeps all CPU security features enabled.
- Change CPU/MMU Virtualization Settings.
- See also Objective 3.1 on this topic.
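To make the multicore vCPU arithmetic above concrete, here is a minimal Python sketch. The constraint values (host logical CPU count, guest OS socket limit) are illustrative assumptions, not values from any specific license or host:

```python
# Sketch: a VM's vCPU count is virtual sockets multiplied by cores per socket.
# The limits checked below are illustrative assumptions.

def total_vcpus(virtual_sockets, cores_per_socket):
    """Total vCPUs presented to the guest."""
    return virtual_sockets * cores_per_socket

def vcpu_config_allowed(virtual_sockets, cores_per_socket,
                        host_logical_cpus, guest_os_socket_limit):
    """Check two of the constraints mentioned above: the VM cannot have more
    vCPUs than the host has logical CPUs, and some guest OS licenses cap
    the socket count (which is why cores-per-socket matters)."""
    vcpus = total_vcpus(virtual_sockets, cores_per_socket)
    return vcpus <= host_logical_cpus and virtual_sockets <= guest_os_socket_limit

# A guest OS licensed for at most 2 sockets can still get 8 vCPUs
# by using 2 sockets x 4 cores per socket.
print(total_vcpus(2, 4))                 # 8
print(vcpu_config_allowed(2, 4, 16, 2))  # True
print(vcpu_config_allowed(8, 1, 16, 2))  # False: too many sockets
```

This is why the cores-per-socket setting exists at all: it lets you scale vCPUs without running into per-socket guest OS licensing.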
Tune Virtual Machine storage configurations
vSphere Virtual Machine Administration, Chapter 8 “Configuring Virtual Machines”, Section “Virtual Disk Configuration”, page 126.
- Know the Virtual Disk Provisioning policies.
NFS datastores with Hardware Acceleration (See VAAI) and VMFS datastores support the following disk provisioning policies.
On NFS datastores that do not support Hardware Acceleration, only thin format is available.
- Thick Provision Lazy Zeroed
Space required for the virtual disk is allocated when the virtual disk is created. Data remaining on the physical device is not erased during creation, but is zeroed out on demand at a later time on first write from the virtual machine.
- Thick Provision Eager Zeroed
A type of thick virtual disk that supports clustering features such as Fault Tolerance. Space required for the virtual disk is allocated at creation time. In contrast to the flat format, the data remaining on the physical device is zeroed out when the virtual disk is created.
- Thin Provision
A thin disk starts small and, at first, uses only as much datastore space as it needs for its initial operations.
- Know the difference between: Dependent, Independent-Persistent and Independent-Nonpersistent Virtual disks.
- Dependent, affected by vSphere snapshots
- Independent-Persistent, not affected by vSphere snapshots. Changes are immediately and permanently written to disk.
- Independent-Nonpersistent, not affected by vSphere snapshots. Changes to disk are discarded when you power off or revert to snapshot.
- RDMs, see Objective 1.1.
- Use Disk Shares, see also topic on SIOC.
- Convert a Virtual Disk from thin to thick.
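The allocation and zeroing differences between the three provisioning policies can be made explicit with a small Python model. This is only a sketch of the behavior described above, not a vSphere API:

```python
# Sketch: the three disk provisioning policies modeled as simple data,
# showing when space is allocated and when blocks are zeroed.

POLICIES = {
    "thick-lazy":  {"allocated_at_creation": True,  "zeroed_at_creation": False},
    "thick-eager": {"allocated_at_creation": True,  "zeroed_at_creation": True},
    "thin":        {"allocated_at_creation": False, "zeroed_at_creation": False},
}

def datastore_space_consumed(policy, provisioned_gb, written_gb):
    """Datastore space a virtual disk consumes after creation plus some
    guest writes. Thick disks consume the full provisioned size up front;
    a thin disk consumes only what the guest has actually written."""
    if POLICIES[policy]["allocated_at_creation"]:
        return provisioned_gb
    return written_gb

# A 100 GB virtual disk with 10 GB of guest data:
print(datastore_space_consumed("thick-eager", 100, 10))  # 100
print(datastore_space_consumed("thin", 100, 10))         # 10
```

Note that both thick variants consume the same datastore space; eager zeroing only changes how much work happens at creation time versus at first write.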
Calculate available resources
vSphere Resource Management Guide,
Chapter 1 “Getting Started with Resource Management”, page 9 and
Chapter 2 “Configuring Resource Allocation Settings”, page 11
In VMware’s terminology, Resource management is the allocation of resources from resource providers to resource consumers.
VMware recognizes the following resource types: CPU, memory, power, storage, and network resources.
What are resource providers?
- Hosts and Clusters;
- Datastore Clusters
For hosts, available resources are the host’s hardware specification, minus the resources used by the virtualization software.
What are resource consumers?
- Virtual Machines
Admission Control: When you power on a virtual machine, the server checks whether enough unreserved resources are available and allows power on only if there are enough resources.
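The admission control check described above boils down to comparing the VM's reservations against the host's unreserved capacity. A minimal Python sketch, with illustrative numbers:

```python
# Sketch of admission control: a power-on succeeds only if the host's
# unreserved CPU and memory can cover the VM's reservations.

def can_power_on(vm_reservation, host_capacity, host_reserved):
    """Return True if every reserved resource still has enough
    unreserved capacity left on the host."""
    for resource, needed in vm_reservation.items():
        unreserved = host_capacity[resource] - host_reserved[resource]
        if needed > unreserved:
            return False
    return True

host_capacity = {"cpu_mhz": 24000, "mem_mb": 65536}
host_reserved = {"cpu_mhz": 20000, "mem_mb": 61440}  # already reserved by other VMs

# 4000 MHz and 4096 MB are still unreserved on this host:
print(can_power_on({"cpu_mhz": 2000, "mem_mb": 4096},
                   host_capacity, host_reserved))  # True
print(can_power_on({"cpu_mhz": 6000, "mem_mb": 2048},
                   host_capacity, host_reserved))  # False: CPU reservation too big
```

Only reservations count here; shares and limits play no role in the admission decision.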
Resource Pools: A resource pool is a logical abstraction for flexible management of resources. Resource pools can be grouped into hierarchies and used to hierarchically partition available CPU and memory resources.
ESXi hosts allocate each virtual machine a portion of the underlying hardware resources based on a number of factors:
- Total available resources for the ESXi host (or the cluster).
- Number of virtual machines powered on and resource usage by those virtual machines.
- Overhead required to manage the virtualization.
- Resource limits defined by the user.
Resource allocation settings are used to determine the amount of CPU, memory, and storage resources provided for a virtual machine. In particular, administrators have several options for allocating resources.
Resource allocation settings are:
- Resource Allocation Shares;
- Resource Allocation Reservations;
- Resource Allocation Limits.
Shares specify the relative importance of a virtual machine (or resource pool).
A reservation specifies the guaranteed minimum allocation for a virtual machine.
A limit specifies an upper bound for CPU, memory, or storage I/O resources that can be allocated to a virtual machine.
All this should be familiar for a VCP…
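As a refresher on how shares behave under contention, here is a minimal Python sketch of proportional allocation. The share values follow the familiar Normal = 1000 / High = 2000 pattern, but the MHz numbers are illustrative; in reality reservations and limits clamp the result, and shares only matter when resources are contended:

```python
# Sketch: contended capacity is divided between VMs in proportion
# to their share values.

def allocate_by_shares(total_mhz, shares):
    """Split total_mhz across VMs proportionally to their shares."""
    total_shares = sum(shares.values())
    return {vm: total_mhz * s / total_shares for vm, s in shares.items()}

# Two VMs fighting over 6000 MHz; "High" shares get twice the "Normal" slice.
alloc = allocate_by_shares(6000, {"vm-high": 2000, "vm-normal": 1000})
print(alloc["vm-high"])    # 4000.0
print(alloc["vm-normal"])  # 2000.0
```

The key property: shares are relative, so doubling every VM's share value changes nothing.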
Properly size a Virtual Machine based on application workload
vSphere Virtual Machine Administration,
In the previous objective, the various components that make up a virtual machine configuration were discussed.
As a rule of thumb, I usually configure a virtual machine with a “minimal” configuration. Where possible, choose efficient virtual NICs (VMXNET3) and the Paravirtual SCSI controller. Do not configure virtual hardware you do not need, such as floppy and CD/DVD drives, because they consume valuable resources even when not used.
When the VM is finished and operational, it is time to evaluate its performance and, if necessary, add extra memory or a vCPU, or perform a Storage vMotion to faster storage (if available).
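The evaluate-then-grow approach above can be sketched as a simple rightsizing check. The utilization thresholds here are my own illustrative assumptions, not VMware guidance:

```python
# Sketch: start minimal, then recommend changes from observed sustained
# utilization. Thresholds (80% / 20%) are illustrative assumptions.

def rightsize(avg_cpu_pct, avg_mem_pct, vcpus, mem_gb, high=80, low=20):
    """Suggest adjustments based on sustained average utilization."""
    suggestions = []
    if avg_cpu_pct > high:
        suggestions.append(("add vCPU", vcpus + 1))
    elif avg_cpu_pct < low and vcpus > 1:
        suggestions.append(("remove vCPU", vcpus - 1))
    if avg_mem_pct > high:
        suggestions.append(("add memory GB", mem_gb * 2))
    return suggestions

# A 2-vCPU / 4 GB VM that sustains 90% CPU but only 50% memory:
print(rightsize(avg_cpu_pct=90, avg_mem_pct=50, vcpus=2, mem_gb=4))
# [('add vCPU', 3)]
```

The downsizing branch matters too: oversized VMs waste host resources and can hurt the scheduler, so removing idle vCPUs is as valid a recommendation as adding busy ones.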
- Also good reading Performance Best Practices for VMware vSphere 5.0, Chapter 2 “ESXi and Virtual Machines”
Modify large memory page settings
See “Other references” for some documents
What are Large Memory Pages?
According to the “Performance Best Practices for VMware vSphere 5.0”:
“In addition to the usual 4KB memory pages, ESXi also provides 2MB memory pages (commonly referred to as “large pages”). By default ESXi assigns these 2MB machine memory pages to guest operating systems that request them, giving the guest operating system the full advantage of using large pages. The use of large pages results in reduced memory management overhead and can therefore increase hypervisor performance.
If an operating system or application can benefit from large pages on a native system, that operating system or application can potentially achieve a similar performance improvement on a virtual machine backed with 2MB machine memory pages.”
There is also a downside to the use of large memory pages:
“Use of large pages can also change page sharing behavior. While ESXi ordinarily uses page sharing regardless of memory demands, it does not share large pages. Therefore with large pages, page sharing might not occur until memory overcommitment is high enough to require the large pages to be broken into small pages. For further information see VMware KB articles 1021095 and 1021896.”
The large memory page settings are part of the Advanced Settings. A few parameters are discussed in the vSphere Resource Management Guide.
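The overhead reduction quoted above is easy to quantify: mapping the same memory region with 2 MB pages needs far fewer page-table (and TLB) entries than with 4 KB pages. A quick Python sketch:

```python
# Sketch: why 2 MB "large pages" reduce memory-management overhead.
# Fewer pages means fewer page-table entries and less TLB pressure.

SMALL_PAGE = 4 * 1024          # 4 KB
LARGE_PAGE = 2 * 1024 * 1024   # 2 MB

def pages_needed(region_bytes, page_size):
    """Pages required to map a region (assumes exact multiples)."""
    return region_bytes // page_size

region = 4 * 1024**3  # a 4 GB guest memory region
print(pages_needed(region, SMALL_PAGE))  # 1048576
print(pages_needed(region, LARGE_PAGE))  # 2048
```

That 512x reduction is also why TPS cannot share large pages effectively: a single differing byte makes a whole 2 MB page unshareable.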
- Performance Best Practices for VMware vSphere 5.0, Section “Large Memory Pages for Hypervisor and Guest Operating System”, Page 28
- Large Page Performance study, http://www.vmware.com/resources/techresources/1039
- VMware KB “Transparent Page Sharing (TPS) in hardware MMU systems” http://kb.vmware.com/kb/1021095
- VMware KB “Use of large pages can cause memory to be fully allocated” http://kb.vmware.com/kb/1021896
Understand appropriate use cases for CPU affinity
vSphere Resource Management Guide,
Chapter 3, CPU Virtualization Basics, Page 15 and also
Chapter 4, Administering CPU Resources, Page 17
See also Objective 3.1
The general idea of CPU affinity is to restrict the scheduling of a virtual machine’s vCPUs to a specific subset of the host’s physical processors. There are a few considerations concerning this subject.
CPU affinity specifies virtual machine-to-processor placement constraints and is different from the relationship created by a VM-VM or VM-Host affinity rule.
The term CPU refers to a logical processor on a hyperthreaded system and refers to a core on a non-hyperthreaded system.
Warning: Be careful when using CPU affinity on systems with hyper-threading. Because the two logical processors share most of the processor resources, pinning vCPUs, whether from different virtual machines or from a single SMP virtual machine, to both logical processors on one core (CPUs 0 and 1, for example) could cause poor performance.
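The warning above can be made concrete with a small sketch. On a hyperthreaded host, consecutive logical CPU numbers typically pair up on one physical core; that numbering is an assumption here, since real topologies can differ:

```python
# Sketch: detect when an affinity set pins vCPUs onto sibling
# hyperthreads of the same physical core (the risky case above).
# Assumes logical CPUs 0,1 share core 0; 2,3 share core 1; etc.

def core_of(logical_cpu, threads_per_core=2):
    """Physical core a logical CPU number belongs to."""
    return logical_cpu // threads_per_core

def share_a_core(affinity_cpus, threads_per_core=2):
    """True if any two pinned logical CPUs map to the same core."""
    cores = [core_of(c, threads_per_core) for c in affinity_cpus]
    return len(cores) != len(set(cores))

print(share_a_core([0, 1]))  # True  -> both vCPUs on one core: risky
print(share_a_core([0, 2]))  # False -> separate cores
```

Spreading pinned vCPUs across cores (0 and 2 rather than 0 and 1) avoids the two logical processors competing for the same core's execution resources.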
The CPU affinity setting for a virtual machine applies to:
- all of the virtual CPUs associated with the virtual machine;
- all other threads (also known as worlds) associated with the virtual machine. Those threads perform the processing required for emulating the mouse, keyboard, screen, CD-ROM, and miscellaneous legacy devices.
There is also a pitfall here: performance might degrade if the virtual machine’s affinity setting prevents these additional threads from being scheduled concurrently with the virtual machine’s virtual CPUs.
If this is the case, VMware recommends adding an extra physical CPU in the affinity settings.
The vSphere Resource Management Guide presents an overview of even more potential issues:
- For multiprocessor systems, ESXi systems perform automatic load balancing. Avoid manual specification of virtual machine affinity to improve the scheduler’s ability to balance load across processors.
- Affinity can interfere with the ESXi host’s ability to meet the reservation and shares specified for a virtual machine.
- Because CPU admission control does not consider affinity, a virtual machine with manual affinity settings might not always receive its full reservation.
Virtual machines that do not have manual affinity settings are not adversely affected by virtual machines with manual affinity settings.
- When you move a virtual machine from one host to another, affinity might no longer apply because the new host might have a different number of processors.
- The NUMA scheduler might not be able to manage a virtual machine that is already assigned to certain processors using affinity.
- Affinity can affect the host’s ability to schedule virtual machines on multicore or hyperthreaded processors to take full advantage of resources shared on such processors (see previous warning).
Configure alternate virtual machine swap locations
vSphere Resource Management Guide, Chapter 6, Administering Memory Resources, Section “Using Swap Files” Page 32.
See also Objective 3.1, section Tune ESXi host memory configuration.