VCAP5-DCA Objective 3.3 – Implement and maintain complex DRS solutions



  • Properly configure BIOS and management settings to support DPM
  • Test DPM to verify proper configuration
  • Configure appropriate DPM Threshold to meet business requirements
  • Configure EVC using appropriate baseline
  • Change the EVC mode on an existing DRS cluster
  • Create DRS and DPM alarms
  • Configure applicable power management settings for ESXi hosts
  • Properly size virtual machines and clusters for optimal DRS efficiency
  • Properly apply virtual machine automation levels based upon application requirements
  • Create and administer ESXi host and Datastore Clusters
  • Administer DRS / Storage DRS

Properly configure BIOS and management settings to support DPM

Official Documentation:
vSphere Resource Management Guide, Chapter 10 “Using DRS Clusters to Manage Resources”, Section “Managing Power Resources”, page 67.

Some background on this subject.

The Distributed Power Management (DPM) feature allows a DRS cluster to reduce its power consumption by powering hosts on and off based on cluster resource utilization.

DPM can use one of three power management protocols to bring a host out of standby mode:

  1. Intelligent Platform Management Interface (IPMI)
  2. Hewlett-Packard Integrated Lights-Out (iLO)
  3. Wake-On-LAN (WOL)

If a host supports multiple protocols, they are used in the order presented above.

If a host does not support any of these protocols it cannot be put into standby mode by vSphere DPM.

Each protocol requires its own hardware support and configuration, hence BIOS and Management Settings will vary depending on the hardware (vendor).

Note: DPM is complementary to host power management policies (See Objective 3.1, Section on Tune ESXi host CPU configuration). Using DPM and host power management together can offer greater power savings than when either solution is used alone.

Example: configuring a Dell R710 server with an iDRAC (Dell’s remote access solution) for DPM. The R710 also contains a BMC, which is needed as well.

The iDRAC supports IPMI, but out-of-the-box, this feature is disabled.

So, log on to the iDRAC, go to “iDRAC Settings”, section “Network Security”, and enable IPMI Over LAN.

Figure 1
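With IPMI Over LAN enabled, you can verify from a management station that the BMC responds before testing DPM itself. A minimal sketch using ipmitool; the IP address and credentials are placeholders for your own iDRAC settings:

```shell
# Query the power state of the host through its BMC.
# 192.168.1.120, root and 'secret' are placeholder values.
ipmitool -I lanplus -H 192.168.1.120 -U root -P secret chassis power status

# DPM brings a host out of standby through the same interface,
# which you can mimic manually for troubleshooting:
ipmitool -I lanplus -H 192.168.1.120 -U root -P secret chassis power on
```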


VCAP5-DCA Objective 3.2 – Optimize virtual machine resources



  • Tune Virtual Machine memory configurations
  • Tune Virtual Machine networking configurations
  • Tune Virtual Machine CPU configurations
  • Tune Virtual Machine storage configurations
  • Calculate available resources
  • Properly size a Virtual Machine based on application workload
  • Modify large memory page settings
  • Understand appropriate use cases for CPU affinity
  • Configure alternate virtual machine swap locations

Tune Virtual Machine memory configurations

Official Documentation:
vSphere Virtual Machine Administration, Chapter 8 “Configuring Virtual Machines”, Section “Virtual Machine Memory Configuration”, page 104.


  • Changing the configuration can be done with the vSphere Client or the vSphere Web Client.
  • The maximum amount of virtual machine memory depends on the virtual machine hardware version.
  • Know about Limits, Reservations and Shares (a VCP5 should…)
  • The Memory Hot Add feature lets you add memory while the VM is powered on, although enabling the feature itself must be done while the VM is powered off.

Figure 1
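For reference, after Memory Hot Add has been enabled (with the VM powered off), the relevant entries in the virtual machine’s .vmx file look like this; the values shown are just an example:

```
memSize = "4096"
mem.hotadd = "TRUE"
```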


VCAP5-DCA Objective 3.1 – Tune and Optimise vSphere performance



  • Tune ESXi host memory configuration
  • Tune ESXi host networking configuration
  • Tune ESXi host CPU configuration
  • Tune ESXi host storage configuration
  • Configure and apply advanced ESXi host attributes
  • Configure and apply advanced Virtual Machine attributes
  • Configure advanced cluster attributes
  • Tune and optimize NUMA controls

Tune ESXi host memory configuration

Official Documentation:
vSphere Resource Management Guide,
Chapter 5, Memory Virtualization Basics, Page 25 and also
Chapter 6, Administering Memory Resources, Page 29

vSphere Monitoring and Performance Guide, Chapter 1, Monitoring Inventory Objects with Performance Charts, Section “Solutions for Memory Performance Problems”, page 19. This section contains some information on troubleshooting memory issues.

Performance Best Practices for VMware vSphere 5.0, Chapter 2 ESXi and Virtual Machines, Section ESXi Memory Considerations, page 25

Chapter 5 Memory Virtualization Basics of the vSphere Resource Management Guide explains the concepts of memory resource management. It is very useful to know what happens when turning the knobs…

Some Highlights to test your knowledge

  • Know the difference between shares, reservations and limits;
  • What is memory overcommitment?
  • The principles of Software-Based Memory Virtualization vs. Hardware-Assisted Memory Virtualization;
  • Is Hardware-Assisted Memory Virtualization always better?

Chapter 6 Administering Memory Resources discusses subjects like:

  • Understanding Memory Overhead
  • How ESXi Hosts Allocate Memory
    Details on the use of Limits, Reservations, Shares and Working Set Size
  • VMX Swap files
    To avoid more confusion, in a few words:
    ESXi reserves memory per virtual machine for a variety of purposes. Memory for the needs of certain components, such as the virtual machine monitor (VMM) and virtual devices, is fully reserved when a virtual machine is powered on. However, some of the overhead memory that is reserved for the VMX process can be swapped. The VMX swap feature reduces the VMX memory reservation significantly (for example, from about 50MB or more per virtual machine to about 10MB per virtual machine). The host creates VMX swap files automatically, provided there is sufficient free disk space at the time a virtual machine is powered on.
  • Memory Tax for Idle Virtual Machines
    You can modify the idle memory tax rate with the Mem.IdleTax option. Use this option, together with the Mem.SamplePeriod advanced attribute, to control how the system determines target memory allocations for virtual machines
  • Memory Reclamation
  • Using Swap Files
    By default, the swap file is created in the same location as the virtual machine’s configuration file. However, it is possible to specify a datastore stored locally on a host. This is a two-step process. First, adjust the Cluster settings:

Figure 1
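The advanced memory attributes mentioned above, such as Mem.IdleTax, can also be read and set per host from the command line. A sketch with esxcli; the value 75 happens to be the default idle tax rate:

```shell
# Show the current value of the idle memory tax rate
esxcli system settings advanced list -o /Mem/IdleTax

# Set the idle memory tax rate to 75 (percent)
esxcli system settings advanced set -o /Mem/IdleTax -i 75
```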


VCAP5-DCA Objective 2.4 – Administer vNetwork Distributed Switch settings



  • Understand the use of command line tools to configure appropriate vDS settings on an ESXi host
  • Determine use cases for and apply Port Binding settings
  • Configure Live Port Moving
  • Given a set of network requirements, identify the appropriate distributed switch technology to use
  • Configure and administer vSphere Network I/O Control
  • Use command line tools to troubleshoot and identify configuration items from an existing vDS

Understand the use of command line tools to configure appropriate vDS settings on an ESXi host

Official Documentation:
Good reading on the use of CLI tools on vSphere Networking is the vSphere Command-Line Interface Concepts and Examples document. Chapter 9 “Managing vSphere Networking”, section “Setting Up vSphere Networking with vSphere Distributed Switch”, page 122.

The CLI commands available to configure a vDS are limited. The following actions should be performed using the vSphere Client:

  • create distributed switches
  • add hosts
  • create distributed port groups
  • edit distributed switch properties and policies

However, you can add and remove uplinks using the vicfg-vswitch or esxcfg-vswitch commands.

To add an uplink port:

vicfg-vswitch --add-dvp-uplink <vmnic> --dvp <DVPort ID> <vDS>

or, using the short options:

vicfg-vswitch -P <vmnic> -V <DVPort ID> <vDS>
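The corresponding command removes an uplink again; a sketch, where vmnic0, the DVPort ID 42 and the switch name dvSwitch1 are example values:

```shell
# Long form: remove vmnic0 as an uplink from DVPort 42 on dvSwitch1
vicfg-vswitch --del-dvp-uplink vmnic0 --dvp 42 dvSwitch1

# Short form, equivalent to the command above
vicfg-vswitch -Q vmnic0 -V 42 dvSwitch1
```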


VCAP5-DCA Objective 2.3 – Deploy and maintain scalable virtual networking



  • Understand the NIC Teaming failover types and related physical network settings
  • Determine and apply Failover settings
  • Configure explicit failover to conform with VMware best practices
  • Configure port groups to properly isolate network traffic

Understand the NIC Teaming failover types and related physical network settings

Official Documentation:
vSphere Networking, Chapter 5 “Networking Policies”, Section “Load balancing and Failover policies”, page 43

The Load Balancing and Failover policies determine how network traffic is distributed between adapters and how traffic is rerouted in the event of an adapter failure.

The Load Balancing policy is one of the available Networking Policies, such as: VLAN, Security, Traffic Shaping Policy and so on.

The Failover and Load Balancing policies include three parameters:

  • Load Balancing policy: The Load Balancing policy determines how outgoing traffic is distributed among the network adapters assigned to a standard switch. Incoming traffic is controlled by the Load Balancing policy on the physical switch.
  • Failover Detection: Link Status/Beacon Probing
  • Network Adapter Order (Active/Standby)

Editing these policies for the vSS and the vDS is done in two different locations within the vSphere Client.

vSS: Hosts and Clusters, Configuration, Hardware, Networking. Select the desired vSS. The “NIC Teaming” tab applies at the vSwitch level; settings can be overridden at the Portgroup level.

Figure 1 vSS
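The same failover settings of a standard vSwitch can also be inspected and changed with esxcli. A sketch; vSwitch0 and the vmnic names are example values:

```shell
# Show the current load balancing / failover policy of vSwitch0
esxcli network vswitch standard policy failover get -v vSwitch0

# Make vmnic0 active and vmnic1 standby, using link status failure detection
esxcli network vswitch standard policy failover set -v vSwitch0 \
  --active-uplinks=vmnic0 --standby-uplinks=vmnic1 --failure-detection=link
```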


VCAP5-DCA Objective 2.2 – Configure and maintain VLANs, PVLANs and VLAN settings



  • Determine use cases for and configure VLAN Trunking
  • Determine use cases for and configure PVLANs
  • Use command line tools to troubleshoot and identify VLAN configurations

Determine use cases for and configure VLAN Trunking

Updated: 14-09-2012

Official Documentation:
vSphere Networking, Chapter 7 “Advanced Networking”, Section, “VLAN Configuration”, page 68.

On a vSS you can only configure one VLAN ID per Portgroup.

A vDS allows you to configure a range of VLAN IDs per portgroup. In fact there are four options for VLAN type on a vDS:

  1. None
    VLAN tagging will not be performed by this dvPort group
  2. VLAN
    Enter a valid VLAN ID (1-4094). The dvPort group will perform VLAN tagging using this VLAN ID
  3. VLAN Trunking
    Enter a range of VLANs you want to be trunked
  4. Private VLAN
    Select a private VLAN you want to use – the Private VLAN must be configured first under the dvSwitch settings prior to this option being configurable

Now you can join physical VLANs to virtual networks.

Remember these VLAN IDs:
VLAN 0 = None;
VLAN 1-4094 = Valid IDs;
VLAN 4095 = All IDs.
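On a vSS these IDs are applied per portgroup; a sketch with esxcli, where the portgroup name is an example:

```shell
# Tag all traffic of portgroup "VM Network" with VLAN ID 10
esxcli network vswitch standard portgroup set -p "VM Network" --vlan-id 10

# VLAN ID 4095 passes all VLANs to the portgroup (Virtual Guest Tagging)
esxcli network vswitch standard portgroup set -p "VM Network" --vlan-id 4095
```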

Ingress = incoming traffic to the vDS
Egress = outgoing traffic from the vDS

Configure VLAN trunking

By default a dvUplink Group is configured for all VLAN IDs.

Figure 1
