VCAP5-DCA Objective 2.4 – Administer vNetwork Distributed Switch settings

Objectives

  • Understand the use of command line tools to configure appropriate vDS settings on an ESXi host
  • Determine use cases for and apply Port Binding settings
  • Configure Live Port Moving
  • Given a set of network requirements, identify the appropriate distributed switch technology to use
  • Configure and administer vSphere Network I/O Control
  • Use command line tools to troubleshoot and identify configuration items from an existing vDS

Understand the use of command line tools to configure appropriate vDS settings on an ESXi host

Official Documentation:
Good reading on the use of CLI tools for vSphere networking is the vSphere Command-Line Interface Concepts and Examples document, Chapter 9 “Managing vSphere Networking”, section “Setting Up vSphere Networking with vSphere Distributed Switch”, page 122.

Summary:
The CLI commands available to configure a vDS are limited. The following actions should be performed using the vSphere Client:

  • create distributed switches
  • add hosts
  • create distributed port groups
  • edit distributed switch properties and policies

However, you can add and remove uplinks using the vicfg-vswitch or esxcfg-vswitch command.

To add an uplink port:

vicfg-vswitch  --add-dvp-uplink <vmnic>  --dvp <DVPort ID> <vDS>

Or:

vicfg-vswitch  -P <vmnic> -V <DVPort ID> <vDS>

To remove an uplink port:

vicfg-vswitch  --del-dvp-uplink <vmnic>  --dvp <DVPort ID> <vDS>

Or:

vicfg-vswitch  -Q <vmnic> -V <DVPort ID> <vDS>

Example, using vMA: remove uplink port vmnic0 from vDS dvSwitch01. First, list the current configuration:

vi-admin@vma5:/usr/bin[ml110g5]> vicfg-vswitch -l

Switch Name     Num Ports       Used Ports      Configured Ports    MTU     Uplinks
vSwitch0        128             11              128                 1500    vmnic1

PortGroup Name                VLAN ID   Used Ports      Uplinks
VM_Clients                    210       0               vmnic1
VM_Servers                    200       0               vmnic1
VM_Internet                   2         1               vmnic1
VM_Management                 100       3               vmnic1
NFS                           2         1               vmnic1
iSCSI                         250       1               vmnic1
FT                            120       1               vmnic1
vMotion                       110       1               vmnic1
Management                    100       1               vmnic1

DVS Name                 Num Ports   Used Ports  Configured Ports  Uplinks
dvSwitch01               256         6           256               vmnic0

DVPort ID           In Use      Client
128                 1           vmnic0
129                 0
130                 0
131                 0
0                   0
11                  1           vmk5

vi-admin@vma5:/usr/bin[ml110g5]> vicfg-vswitch -Q vmnic0 -V 128 dvSwitch01

Deleted uplink adapter successfully.

Note: The esxcfg-vswitch command on an ESXi host does not present a message after creating or deleting an uplink.
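
For example, to add the uplink back directly from the ESXi Shell (a sketch reusing the DVPort ID 128 and dvSwitch01 from the example above; since no confirmation is printed, verify the result with a listing afterwards):

esxcfg-vswitch -P vmnic0 -V 128 dvSwitch01
esxcfg-vswitch -l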

You cannot use the ESXCLI command for this action.
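
esxcli can, however, be used to view the distributed switch configuration from the host side, which is handy to check the result of the commands above (ESXi 5.x; the exact output fields may vary per build):

esxcli network vswitch dvs vmware list

This lists every vDS the host is connected to, including its uplinks, MTU and the ports in use.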

Other references:

  • VMware KB 1008127 “Configuring vSwitch or vNetwork Distributed Switch from the command line in ESX/ESXi”

Determine use cases for and apply Port Binding settings

Official Documentation:
vSphere Networking, Chapter 3 “Setting Up Networking with vSphere Distributed Switches”, Section “Edit General Distributed Port Group Settings”, page 26.

Summary:
Port binding is available in the dvPortgroup settings of a vDS, under the General settings. The port binding type determines when and how the virtual NICs of a virtual machine are assigned a port in the port group. Port binding is important because choosing the wrong type can cause VMs to lose network connectivity or fail to connect when ports run out.

Figure 1

Three options are available:

  • Static binding
    The default. A dvPort is assigned to a vNIC as soon as the VM is connected to the port group and stays assigned even while the VM is powered off. When all ports are taken, no additional VMs can connect to the distributed port group.
  • Dynamic binding
    A dvPort is assigned only when a VM is powered on and its vNIC is connected, which allows port overcommitment (more VMs than ports). Dynamic binding is deprecated as of ESXi 5.0.
  • Ephemeral (no binding)
    Similar to a standard vSwitch: a port is created and assigned by the host when a VM powers on with its vNIC connected, and removed again when the VM powers off or the vNIC is disconnected. Because the host itself can do the assignment, an ephemeral port group remains usable when vCenter Server is down, which makes it useful for recovery scenarios. The maximum is only limited by the maximum number of ports on the dvSwitch.

Note: A newly created dvPortgroup uses Static binding by default; to use another binding type you must edit the dvPortgroup settings after creation.
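
To make the difference concrete (a sketch, assuming the default of 128 ports per distributed port group): with Static binding, the 129th vNIC you try to connect to the port group is refused, even if some of the VMs holding the other 128 ports are powered off. With Ephemeral binding only powered-on, connected vNICs consume a port, so the same port group can serve many more registered VMs, as long as no more than 128 of them are connected at the same time.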

One question remains: why should you choose the default Static binding over Ephemeral (no binding)?
This question is answered in this excellent post: “Why use Static Port Binding on VDS?”

It makes clear why it is highly recommended to use the default Static binding.

Other references:

  • Another interesting read is VMware KB 1022312 “Choosing a port binding type”. Do not miss the section on a new vSphere 5 feature called autoExpand, which allows a port group to expand automatically by a small predefined margin whenever it is about to run out of ports.

Configure Live Port Moving

Official Documentation:
vSphere Networking still lists this topic in the index. It was also described in the ESX Configuration Guide for ESX 4.0 and vCenter Server 4.0. The only problem: the setting as described cannot be found in the software.

Summary:
There is a lot of discussion about this topic; search the VMware Communities forum.
One post describes Live Port Moving as: “Transfer stand-alone port groups to distributed port groups, assigning settings associated with distributed port group to the stand-alone group”.

I welcome comments and examples on this topic.

Other references:


Given a set of network requirements, identify the appropriate distributed switch technology to use

Official Documentation: None
Summary:
Besides the well-known VMware vNetwork Distributed Switch, vSphere also supports third-party distributed switches. The best-known example is the Cisco Nexus 1000V.

The best reason I can think of to choose a Cisco Nexus 1000V is in large enterprises where the management of firewalls, core switches and access switches belongs to the network administrators.
The management of the VMware Distributed Switch normally lies with the vSphere administrators, but with a Cisco Nexus 1000V it is possible to completely separate the management of the virtual switches and hand it over to the network administrators, without giving them access to the rest of the vSphere platform.

Other references:

Configure and administer vSphere Network I/O Control

Official Documentation:
vSphere Networking, Chapter 4 “Managing Network Resources”, Section “vSphere Network I/O Control”, page 35

Summary:
vSphere Network I/O Control (NIOC) was introduced in vSphere 4.1.
Network resource pools determine the bandwidth that different network traffic types are given on a vSphere distributed switch.

When network I/O control is enabled, distributed switch traffic is divided into the following predefined network resource pools:

  • Fault Tolerance traffic,
  • iSCSI traffic,
  • vMotion traffic,
  • management traffic,
  • vSphere Replication (VR) traffic,
  • NFS traffic,
  • virtual machine traffic.

In vSphere 5, NIOC introduces a new feature: user-defined network resource pools. You control the bandwidth each network resource pool is given by setting the physical adapter shares and the host limit for that pool.

Also new is the QoS priority tag. Assigning a QoS priority tag to a network resource pool applies an 802.1p tag to all outgoing packets associated with that network resource pool.

Requirements for NIOC:

  • Enterprise Plus license
  • Use vDS

Typical steps for NIOC:

  • NIOC is disabled by default, so you have to enable it first.
    Select the vDS, go to the Resource Allocation tab and click Properties...

Figure 2

  • Place a tick and you are done

Figure 3

  • Create a Network Resource Pool. Select New Network Resource Pool...

Figure 4

  • Provide a logical name. Under Resource Allocation, select the Physical Adapter Shares; the options are High, Normal, Low or a custom value.
    If your physical network is configured for QoS priority tagging, also select the desired QoS priority tag.
  • The final step is to associate a portgroup with the newly created Network Resource Pool. Select Manage Port Groups...

Figure 5

  • Under “Network resource pool”, select the desired resource pool. Port groups that were changed during this session are marked with a “Yes”.
  • Back in the overview, select a user-defined network resource pool and press the Port groups button in the details section to see the associated port groups.

Figure 6

That’s all.
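
To get a feeling for how the Physical Adapter shares work: shares are relative values, applied per physical adapter and only when an uplink is saturated. A quick worked example, assuming the commonly documented values High = 100, Normal = 50 and Low = 25: if only Fault Tolerance traffic (100 shares) and vMotion traffic (50 shares) are active on a congested 1 Gbps uplink, FT gets 100/150 ≈ 667 Mbps and vMotion 50/150 ≈ 333 Mbps. A Host Limit, in contrast, is an absolute cap in Mbps that is enforced even when there is no contention.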

Other references:

Use command line tools to troubleshoot and identify configuration items from an existing vDS

Official Documentation:

Summary:
See also objective “Understand the use of command line tools to configure appropriate vDS settings on an ESXi host” in this section.

Note: a very useful command is: net-dvs.
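
The net-dvs command is not officially documented, but run without arguments on an ESXi host it should dump the complete vDS configuration as the host knows it (port and uplink configuration, VLAN, MTU and teaming settings, plus per-port statistics). A short troubleshooting sketch from the ESXi Shell, redirecting the rather long dump to a file and then locating the DVPort a running VM is attached to:

net-dvs > /tmp/net-dvs.txt
esxcli network vm list
esxcli network vm port list -w <world id>

The last two commands list the networking worlds of the running VMs and, for a given World ID, show the port details of that VM's vNICs, including the DVPort ID.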

Other references:
