- Configure SNMP
- Determine use cases for and applying VMware DirectPath I/O
- Migrate a vSS network to a Hybrid or Full vDS solution
- Configure vSS and vDS settings using command line tools
- Analyze command line output to identify vSS and vDS configuration details
- Configure NetFlow
- Determine appropriate discovery protocol
vSphere Networking, Chapter 8 “Monitoring Networked Devices with SNMP and vSphere”, page 63
For more info on SNMP, see this Wikipedia article.
SNMP in a few words: Simple Network Management Protocol (SNMP) is an “Internet-standard protocol for managing devices on IP networks.”
An SNMP-managed network consists of three key components:
- Managed device
- Agent, software which runs on managed devices
- Network management system (NMS), software which runs on the manager
A managed device is a network node that implements an SNMP interface that allows unidirectional (read-only) or bidirectional access to node-specific information. Managed devices exchange node-specific information with the NMSs. Sometimes called network elements, the managed devices can be any type of device, including, but not limited to, routers, access servers, switches, bridges, hubs, IP telephones, IP video cameras, computer hosts, and printers.
An agent is a network-management software module that resides on a managed device. An agent has local knowledge of management information and translates that information to or from an SNMP-specific form. The most common ways of information exchange are:
- Responding to a GET operation, which is a specific request for information from the NMS (initiated by the NMS)
- By sending a trap, which is an alert sent by the SNMP agent to notify the management system of a particular event or condition (initiated by the Agent)
A network management system (NMS) executes applications that monitor and control managed devices. NMSs provide the bulk of the processing and memory resources required for network management. One or more NMSs may exist on any managed network.
Management Information Base (MIB) files define the information that can be provided by managed devices. The MIB files contain object identifiers (OIDs) and variables arranged in a hierarchy.
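Both exchange styles can be exercised from the NMS side with the net-snmp command-line tools; in this sketch the hostname, the community string and the reachability of the agent are assumptions:

```shell
# Hypothetical NMS side, assuming the net-snmp tools and a reachable agent.
# A GET is initiated by the NMS and requests one specific value:
snmpget -v 2c -c public esxi01.example.com sysName.0

# Walking a subtree of the MIB hierarchy by OID; 1.3.6.1.4.1.6876 is
# VMware's enterprise subtree:
snmpwalk -v 2c -c public esxi01.example.com 1.3.6.1.4.1.6876
```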
And this is where VMware vSphere comes in: vCenter Server and ESXi both have SNMP agents, each with different capabilities.
From the documentation: “The SNMP agent included with vCenter Server can be used to send traps when the vCenter Server system is started and when an alarm is triggered on vCenter Server. The vCenter Server SNMP agent functions only as a trap emitter and does not support other SNMP operations, such as GET.”
To use SNMP with vCenter Server, open the vSphere Client and in the menu go to Administration, vCenter Server Settings.
Under the SNMP settings, the Primary Receiver (the network management system, NMS) is enabled by default. Provide the following information:
- Receiver URL: DNS name or IP of the NMS;
- The port number of the receiver; 162 is the standard (and most common) trap port;
- The Community string. Re-using “Public” is not a very good idea.
If needed, up to 3 additional receivers can be configured.
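On the receiving end, any NMS listening on the configured port will do. As a minimal, hedged sketch, the net-snmp snmptrapd daemon can play the role of trap receiver (the community string is taken from the example below; the configuration path is an assumption):

```shell
# Minimal trap receiver using net-snmp's snmptrapd (an assumption; any NMS works).
# /etc/snmp/snmptrapd.conf contains one line authorizing the community:
#   authCommunity log PublicVirtual
# Run the daemon in the foreground, logging received traps to stdout, on udp/162:
snmptrapd -f -Lo -c /etc/snmp/snmptrapd.conf udp:162
```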
ESXi includes an SNMP agent embedded in hostd that can both send traps and receive polling requests such as GET requests.
The SNMP agent is disabled by default. You need the vSphere CLI or vMA to enable and configure the SNMP agent. Unfortunately, these settings are not available under the “Advanced Settings” section in the vSphere Client.
```
Synopsis: /usr/bin/vicfg-snmp OPTIONS

Command-specific options:
  --communities -c  Set communities separated by comma: comm1[,...]
                    (this overwrites previous settings)
  --disable     -D  Stop SNMP service
  --enable      -E  Start SNMP service
  --hwsrc       -y  Where to source hardware events from, IPMI sensors or
                    CIM indications. One of: indications|sensors
  --notraps     -n  Comma-separated list of trap OIDs for traps not to be
                    sent by the agent. Use value 'reset' to clear the setting
  --port        -p  Sets the port of the SNMP agent. The default is udp/161
  --reset       -r  Return agent configuration to factory defaults
  --show        -s  Displays SNMP agent configuration
  --targets     -t  Set destination of notifications (traps):
                    hostname[@port][/community][,...]
                    (this overwrites previous settings)
                    (IPv6 addresses valid for vSphere 4.0 and later)
  --test        -T  Send out a test notification to validate configuration
  --vihost      -h  The host to use when connecting via a vCenter Server
```
To show the current settings of the SNMP Agent:
```
vi-admin@vma5:~[ml110g5]> vicfg-snmp -s
Current SNMP agent settings:
Enabled              : 0
UDP port             : 161
Communities          :
Notification targets :
Options              : EnvEventSource=indications
```
To set a community string, named “PublicVirtual”:
```
vi-admin@vma5:~[ml110g5]> vicfg-snmp -c PublicVirtual
Changing community list to: PublicVirtual...
Complete.
```
To send traps to nma.virtual.local, using port 162 and community “PublicVirtual”:
```
vi-admin@vma5:~[ml110g5]> vicfg-snmp -t nma.virtual.local@162/PublicVirtual
Changing notification(trap) targets list to: nma.virtual.local@162/PublicVirtual...
Complete.
```
The SNMP Agent is still disabled. To enable the agent:
```
vi-admin@vma5:~[ml110g5]> vicfg-snmp -E
Enabling agent...
Complete.
```
This is not the whole story. You also need to configure the NMS. You have options to filter out certain traps, and there is an option to send out a test trap to validate the configuration. Chapter 8 also presents an extended overview of the MIBs.
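The test and filter options from the synopsis look like this in practice (the trap OID below is purely illustrative):

```shell
# Send a test notification to the configured targets to validate the setup:
vicfg-snmp -T

# Suppress specific traps by OID (illustrative OID), and clear the filter again:
vicfg-snmp -n 1.3.6.1.4.1.6876.4.1.0.1
vicfg-snmp -n reset
```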
- VMware KB 1008065 “Configuring SNMP Traps for ESXi/ESX 3.5, 4.x, and 5.0”
Determine use cases for and applying VMware DirectPath I/O
This subject has also been covered in Objective 1.1
Migrate a vSS network to a Hybrid or Full vDS solution
vSphere Networking, Chapter 2 and Chapter 3 contain a lot of information on setting up vSphere Standard Switches and vSphere Distributed Switches, but no specific information on this objective.
Recommended reading on this subject are these documents:
- “VMware vNetwork Distributed Switch: Migration and Configuration”. This whitepaper, released during the vSphere 4.x era, is intended to help with migrating from an environment with vSS to one using vDS. It discusses possible scenarios and provides step-by-step examples of how to migrate.
- “VMware vSphere 4: Deployment Methods for the VMware vNetwork Distributed Switch”. This paper discusses and suggests the most effective methods of deployment for the VMware vNetwork Distributed Switch (vDS) in a variety of vSphere 4 environments. It also has a chapter on choosing a method for migration to a vDS.
- “VMware vSphere Distributed Switch Best Practices”. This vSphere 5.x whitepaper describes two example deployments, one using rack servers and the other using blade servers. For each of these deployments, different VDS design approaches are explained. The deployments and design approaches described in this document are meant to provide guidance as to what physical and virtual switch parameters, options and features should be considered during the design of a virtual network infrastructure.
Migrate Virtual Machine Portgroups
One option is to use the “Migrate Virtual Machine Networking…” wizard: right-click a vDS.
Select the Source and Destination Network:
And select the VMs on the Source network that you want to migrate.
Migrate VMKernel Ports
Go to Configuration, Hardware, Networking and select the vSphere Distributed Switch view.
From here select Manage Virtual Adapters…
Select “Migrate existing virtual adapters”.
Select the adapters to migrate and complete the wizard.
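Parts of a migration can also be done from the CLI. A hedged sketch using vicfg-vswitch, in which the NIC name, the DVUplink port ID (16) and the switch names are assumptions for this example:

```shell
# Unlink an uplink from the standard switch...
vicfg-vswitch -U vmnic1 vSwitch0

# ...and add it to the distributed switch, bound to a free DVUplink port
# (16 is an illustrative dvPort ID):
vicfg-vswitch -P vmnic1 -V 16 dvSwitch01
```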
Configure vSS and vDS settings using command line tools
Summary: VMware in fact offers two completely different CLIs with options to configure vSS and vDS and to analyze their output.
- The VMware vSphere CLI, available on an ESXi host, as an installable package on a Windows or Linux client, or as part of the vSphere Management Assistant (vMA)
- The VMware vSphere PowerCLI, available on any client that supports Microsoft’s PowerShell
vSphere CLI commands related to the vSS and vDS are:
- # esxcli network namespace (now works with FastPass on the vMA)
- # vicfg-vmknic
- # vicfg-vswitch
- # net-dvs (only on a ESXi host)
- # vicfg-nics
- # vicfg-route
- # vmkping (only on a ESXi host)
- Note: on an ESXi host, commands starting with vicfg- are replaced by esxcfg-.
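A few of these commands in action; a sketch assuming an ESXi 5 host (everything here only inspects, nothing is changed):

```shell
# List standard switches with their uplinks, ports and policies:
esxcli network vswitch standard list

# List the VMware distributed switches the host participates in:
esxcli network vswitch dvs vmware list

# Show the VMkernel interfaces (vicfg-vmknic -l gives similar output):
esxcli network ip interface list

# Classic overview of all vSwitches and port groups:
vicfg-vswitch -l
```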
The concept behind Microsoft PowerShell is somewhat different. If you haven’t done so already, it is certainly worth investing some time in learning PowerShell.
PowerShell is a very powerful command shell, and more and more vendors, like VMware, Veeam and many others, are adding extensions (cmdlets) to it.
Concerning Virtual Networking, four categories are available:
- Ivo Beerens has put together a nice CLI cheat sheet.
- The complete vSphere Command Line documentation is here.
- The complete vSphere PowerCLI documentation is here.
Analyze command line output to identify vSS and vDS configuration details
Summary: See previous objective
vSphere Networking, Chapter 6 “Advanced Networking”, section “Configure NetFlow”, page 70.
NetFlow is a network analysis tool that you can use to monitor network and virtual machine traffic. NetFlow is available on vSphere distributed switches version 5.0.0 and later. The official documentation describes the steps to configure NetFlow.
NetFlow is enabled on the vDS level.
Most important settings:
- the VDS IP address.
With an IP address assigned to the vSphere distributed switch, the NetFlow collector can interact with the distributed switch as a single switch, rather than interacting with a separate, unrelated switch for each associated host.
- the Sampling rate.
The sampling rate determines what portion of the traffic NetFlow collects: a collector with a sampling rate of 2 collects data from every other packet, and one with a sampling rate of 5 from every fifth packet.
A sampling rate of 0 means no sampling: data is collected from every packet!
- Process internal flows only, if you want to analyse only the traffic between two or more VMs on the same host.
NetFlow needs to be enabled at the DVUplinks level and/or at the dvPortGroup level. At both levels, an override of the port policies is allowed.
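The sampling-rate semantics can be sketched in a few lines of shell; the function name and the modulo interpretation of “every nth packet” are my assumptions, with numbered lines standing in for packets:

```shell
# Sketch of NetFlow's sampling-rate semantics (assumption: rate n > 0 keeps
# every nth packet; rate 0 means no sampling, i.e. every packet is kept).
sample() {
  rate=$1
  if [ "$rate" -eq 0 ]; then
    cat                          # no sampling: every "packet" passes through
  else
    awk -v r="$rate" 'NR % r == 0'
  fi
}

# 10 numbered "packets", sampling rate 5 -> packets 5 and 10 are collected:
seq 1 10 | sample 5
```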
- Eric Sloof has made a great video on Enabling NetFlow on a vSphere 5 Distributed Switch
Determine appropriate discovery protocol
vSphere Networking, Chapter 6 “Advanced Networking”, section “Switch Discovery Protocol”, page 70.
Since vSphere 5, two switch discovery protocols are supported. Switch discovery protocols allow vSphere administrators to determine which switch port is connected to a given vSphere standard switch or vSphere distributed switch. When a switch discovery protocol is enabled for a particular vSphere distributed switch or vSphere standard switch, you can view properties of the peer physical switch, such as device ID, software version, and timeout, from the vSphere Client.
- Cisco Discovery Protocol (CDP) is available for vSphere standard switches and vSphere distributed switches connected to Cisco physical switches (and other switches which support CDP).
- Link Layer Discovery Protocol (LLDP) is available for vSphere distributed switches version 5.0.0 and later. LLDP is vendor neutral and can be seen as the successor of CDP.
The BIG question: which switch discovery protocol do we use? That depends, imho, on:
- Which switch discovery protocols are supported by the connected physical switches?
- Do we want to enable a switch discovery protocol on a vSS or a vDS?
- Which output do we want?
- Wikipedia on CDP.
- Wikipedia on LLDP.
- Jason Boche discussing LLDP.
- Rickard Nobel on troubleshooting ESXi with LLDP.
vSphere Networking, Chapter 6 “Advanced Networking”, section “Switch Discovery Protocol”, page 71.
On a vSS, CDP is enabled by default (in listen mode). To change the settings, you need the vSphere CLI command vicfg-vswitch or esxcli.
# vicfg-vswitch -b vSwitch0, shows the current CDP status
# vicfg-vswitch -B <mode> vSwitch0, changes the setting
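Putting it together; a sketch in which vSwitch0 is an example name (valid CDP modes are down, listen, advertise and both):

```shell
# Show the current CDP mode of vSwitch0:
vicfg-vswitch -b vSwitch0

# Set CDP to 'both' (valid modes: down, listen, advertise, both):
vicfg-vswitch -B both vSwitch0

# On the ESXi shell itself, the information learned via CDP can be dumped with:
vim-cmd hostsvc/net/query_networkhint
```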
The vDS is configured using the vSphere Client.
See VMware KB 1003885 “Configuring the Cisco Discovery Protocol (CDP) with ESX” for detailed instructions on configuring CDP on a vSS, vDS and a Cisco physical switch.
Important to know: there are three modes available:
- Listen mode – The ESXi/ESX host detects and displays information about the associated Cisco switch port, but information about the vSwitch is not available to the Cisco switch administrator.
- Advertise mode – The ESXi/ESX host makes information about the vSwitch available to the Cisco switch administrator, but does not detect and display information about the Cisco switch.
- Both mode – The ESXi/ESX host detects and displays information about the associated Cisco switch and makes information about the vSwitch available to the Cisco switch administrator.
- VMware KB 1007069 “Cisco Discovery Protocol (CDP) Network information” contains a section on how to obtain this information using the PowerCLI and the vSphere CLI. The esxcli command can also be used.
vSphere Networking, Chapter 6 “Advanced Networking”, section “Switch Discovery Protocol”, page 72.
See also previous objective on CDP. LLDP is only available on a vDS.