Configure VMware ESXi 4.1 Networking

In my humble opinion, one of the most interesting parts of configuring a fresh ESXi host is the networking. You have an ESXi host with 4 or more NICs and network-based shared storage like iSCSI or NFS, and on top of that you want to configure for redundancy, provide vMotion and FT, enable cluster features like HA and DRS, and so on.

Some time ago, I stumbled on this excellent blog post by Kendric Coleman, in which he presents several scenarios for designing a vSphere host equipped with six physical NICs.

Recently, I read a great book entitled “VMware vSphere 4.1 HA and DRS Technical Deepdive”. Written by Duncan Epping and Frank Denneman, it explains the principles behind HA, DRS and DPM and also offers very useful design principles for configuring HA and DRS. This book is a must-read for every serious VMware admin out there. What is more, even before the GA of vSphere 5, the successor “VMware vSphere 5 Clustering Technical Deepdive” is already available (in fact, it is already on my iPad).
Especially in the first part, about VMware High Availability, the authors present a few interesting design principles concerning network configuration.

In this post, I will try to combine both sources and create a design for an ESXi server with 6 NICs.

There are a few preconditions:

  • I leave aside the question of how to distribute on-board NICs and NICs on expansion cards (the ESXi server in this example is virtual)
  • Shared storage is iSCSI
  • Try to fit in FT
  • Everything must be redundant; assume we have two stacked switches for Management, vMotion and LAN traffic and two stacked switches for iSCSI traffic (it is best practice to have separate switches for storage traffic). The switches have been configured correctly (trunk ports, etc.)

The first part of the design is the Management network. There are two options when creating a virtual switch: a vNetwork Standard Switch (vSS) or a vNetwork Distributed Switch (vDS). Besides the required license (Enterprise Plus), there is debate about whether you should go for a 100% vDS solution or a hybrid approach (a combination of vSS and vDS). For the Management network, however, we prefer a vSS. The other vSwitches can be either a vSS or a vDS.

In a vSphere High Availability (HA) cluster, the “heartbeat” network plays a very important role, and on ESXi that is the Management network. NIC teaming provides redundancy. The preferred scenario is a single Management network with the vmnics in an Active/Standby configuration. It is also common practice to combine the Management network with the vMotion network on the same vSwitch. This results in the following design.

Management Network
VLAN 2
Management Traffic is Enabled
vmk0: 192.168.2.53
vmnic0 Active / vmnic1 Standby
Load balancing: Use explicit failover order
Failback: No

vMotion
VLAN 21
vMotion is Enabled
vmk1: 192.168.21.53
vmnic1 Active / vmnic0 Standby
Load balancing: Use explicit failover order
Failback: No

Needless to say, vmnic0 is connected to the first switch in the stack and vmnic1 to the second switch in the stack.
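
The following is a minimal sketch of how this first vSwitch could be configured from the vMA, assuming vSwitch0, the default “Management Network” port group and vmk0 were created during installation with vmnic0 as uplink. Note that vicfg-vswitch does not manage the NIC teaming policy; the explicit failover order (Active/Standby), the Failback setting and enabling vMotion on vmk1 are configured afterwards with the vSphere Client.

# -L link the second physical NIC to vSwitch0
/usr/bin/vicfg-vswitch vSwitch0 -L vmnic1

# -v set the VLAN ID on the existing Management Network port group
/usr/bin/vicfg-vswitch -v 2 -p "Management Network" vSwitch0

# -A add the vMotion port group and tag it with VLAN 21
/usr/bin/vicfg-vswitch -A vMotion vSwitch0
/usr/bin/vicfg-vswitch -v 21 -p vMotion vSwitch0

# -a add the vMotion VMkernel NIC (becomes vmk1)
/usr/bin/vicfg-vmknic -a -i 192.168.21.53 -n 255.255.255.0 -p vMotion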

The second part of the design is the Storage network. Another recommendation from the HA and DRS Deepdive is to reduce the chance of a split-brain scenario. A split-brain situation can occur during an HA incident when a virtual machine is restarted on another host while it has not been powered off on the original host. So for all network-based storage, iSCSI included, it is recommended to create a secondary Management network on the same vSwitch as the storage network to detect a storage outage.

As the storage adapter to connect to our iSCSI storage, we will use the iSCSI Software Adapter. For redundancy and load balancing we configure two VMkernel NICs, both of which will be bound to the iSCSI Software Adapter.

For iSCSI networks, it is also recommended to enable Jumbo frames. For more information on Jumbo frames see this link.
Unfortunately, in vSphere 4.1 it is not possible to use the vSphere Client to create a virtual switch with Jumbo frames enabled; you will have to use a CLI. In this example I used the vSphere Management Assistant (vMA).

iSCSI1
VLAN
vmk2: 192.168.50.53
vmnic2 Active / vmnic3 Unused

iSCSI2
VLAN
vmk3: 192.168.50.63
vmnic3 Active / vmnic2 Unused

Management Network2
VLAN
Management Traffic is Enabled
vmk4: 192.168.50.73
vmnic2 Active / vmnic3 Active
Load balancing: Use explicit failover order
Failback: No

Now the actual commands. I assume that the iSCSI Software Adapter has already been enabled; this can be done with the vSphere Client.

# vSwitch1
# -a add new vSwitch
/usr/bin/vicfg-vswitch -a vSwitch1

# -m set MTU value
/usr/bin/vicfg-vswitch vSwitch1 -m 9000

# -A add portgroup
/usr/bin/vicfg-vswitch -A iSCSI1 vSwitch1
/usr/bin/vicfg-vswitch -A iSCSI2 vSwitch1

# -a add VMkernel nic, requires -i IP address, -n Netmask and -p Portgroup. -m set MTU is optional
/usr/bin/vicfg-vmknic -a -i 192.168.50.53 -n 255.255.255.0 -m 9000 -p iSCSI1

/usr/bin/vicfg-vmknic -a -i 192.168.50.63 -n 255.255.255.0 -m 9000 -p iSCSI2

# -L bind physical NIC to vSwitch
/usr/bin/vicfg-vswitch vSwitch1 -L vmnic2
/usr/bin/vicfg-vswitch vSwitch1 -L vmnic3

# -N unlink physical NIC from portgroup
/usr/bin/vicfg-vswitch -p iSCSI1 -N vmnic3 vSwitch1
/usr/bin/vicfg-vswitch -p iSCSI2 -N vmnic2 vSwitch1

# add 2nd management portgroup
/usr/bin/vicfg-vswitch -A "Management Network2" vSwitch1

# configure VMkernel nic
/usr/bin/vicfg-vmknic -a -i 192.168.50.73 -n 255.255.255.0 -m 9000 -p "Management Network2"

# To bind the VMkernel NICs to the Software iSCSI adapter
esxcli --server=<esxi host> --username=root --password=<password> swiscsi nic add -n vmk2 -d <virtualHBA>

esxcli --server=<esxi host> --username=root --password=<password> swiscsi nic add -n vmk3 -d <virtualHBA>

# To check the bindings on the Software iSCSI adapter
esxcli --server=<esxi host> --username=root --password=<password> swiscsi nic list -d <virtualHBA>

An extra recommendation from the HA and DRS Deepdive is to specify an additional isolation address. If you are using network-based storage like iSCSI, a good choice is the IP address of the storage device, in this example 192.168.50.11.
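
For reference, a sketch of the HA advanced options involved, set under Cluster Settings > VMware HA > Advanced Options in the vSphere Client; the value is the storage IP address assumed in this example.

# HA advanced options (Cluster Settings > VMware HA > Advanced Options)
das.isolationaddress1 = 192.168.50.11   # IP address of the iSCSI storage device
# optional: set to false if the default gateway of the Management network
# should no longer be used as an isolation address
das.usedefaultisolationaddress = true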

The third and final part is the vSwitch for the virtual machine networks. In case your VMs run on VLANs, create a vSwitch and add a port group for every VLAN. Label each port group with a name that reflects the VLAN. In this example, two port groups have been created. There are several options for the load balancing policy; a recommended choice is “Use explicit failover order” instead of the default “Route based on the originating virtual port ID”.

VM Network31
VLAN 31

VM Network32
VLAN 32

All adapters Active/Active
Load balancing: Use explicit failover order
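
A minimal sketch for this third vSwitch, again from the vMA. The vSwitch name vSwitch2 and the uplinks vmnic4 and vmnic5 are assumptions (the two remaining NICs); the failover order is set afterwards with the vSphere Client.

# -a add a new vSwitch for the virtual machine networks
/usr/bin/vicfg-vswitch -a vSwitch2

# -L link the two remaining physical NICs
/usr/bin/vicfg-vswitch vSwitch2 -L vmnic4
/usr/bin/vicfg-vswitch vSwitch2 -L vmnic5

# -A add a port group per VLAN, -v set the VLAN ID
/usr/bin/vicfg-vswitch -A "VM Network31" vSwitch2
/usr/bin/vicfg-vswitch -v 31 -p "VM Network31" vSwitch2
/usr/bin/vicfg-vswitch -A "VM Network32" vSwitch2
/usr/bin/vicfg-vswitch -v 32 -p "VM Network32" vSwitch2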

What is missing in this design so far is Fault Tolerance (FT). It is recommended to put FT logging on a separate network. One possibility is to add two extra physical NICs and create an extra vSwitch. Staying with 6 NICs, I consider it also possible to combine FT with the Management and vMotion networks on the first vSwitch, which results in the following design.

Management Network
VLAN 2
Management Traffic is Enabled
vmk0: 192.168.2.53
vmnic0 Active / vmnic1 Standby
Load balancing: Use explicit failover order
Failback: No

vMotion
VLAN 21
vMotion is Enabled
vmk1: 192.168.21.53
vmnic1 Active / vmnic0 Standby
Load balancing: Use explicit failover order
Failback: No

FaultTolerance
VLAN 22
Fault Tolerance Logging is Enabled
vmk5: 192.168.22.53
vmnic0 Active / vmnic1 Standby
Load balancing: Use explicit failover order
Failback: No
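
If FT is combined with the first vSwitch as shown above, the extra port group and VMkernel NIC could be created like this. This is a sketch: vmk5 is an assumption (vmk0 through vmk4 are already in use), and enabling Fault Tolerance Logging on the VMkernel NIC, as well as the failover order, is done with the vSphere Client.

# -A add the FaultTolerance port group on vSwitch0, -v set VLAN 22
/usr/bin/vicfg-vswitch -A FaultTolerance vSwitch0
/usr/bin/vicfg-vswitch -v 22 -p FaultTolerance vSwitch0

# -a add the VMkernel NIC for FT logging (becomes vmk5)
/usr/bin/vicfg-vmknic -a -i 192.168.22.53 -n 255.255.255.0 -p FaultTolerance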

This is my design so far. I welcome your ideas, suggestions and other feedback. And please do not shoot me. Thank you very much for your attention.

Updated: 29-7-2011, minor corrections and a stupid mistake concerning network redundancy, sorry.

4 thoughts on “Configure VMware ESXi 4.1 Networking”

  1. Dana White 27/01/2012 / 00:15

    First off, I would like to say this post has saved me considerable time in scripting my servers. The only issue I had with it was “Management Traffic is Enabled”. When I used the commands in the article, it did not actually enable (check the box) Management Traffic. Is there a command that can set what is necessary for the box to be checked for Management Traffic Enabled? I found the one for enabling vMotion to be
    vicfg-vmknic.pl -E -p . While I have your ear, maybe the command to enable Fault Tolerance logging too.

  2. George Gutis 21/02/2013 / 04:20

    I have a doubt about this design: why does the iSCSI network need a management network? What is its purpose?

    • paulgrevink 22/02/2013 / 20:08

      Hello George,

      Thanks for your feedback. As the title of this post shows, it was originally written during the vSphere 4.1 era.
      One of the many recommendations of the “VMware vSphere 4.1 HA and DRS Technical Deepdive” was to have a secondary Service Console (ESX) or Management network (ESXi)
      running on the same vSwitch as the storage network (iSCSI or NFS) to detect a storage outage and avoid false positives during isolation detection.
      In the successor “VMware vSphere 5 Clustering Technical Deepdive”, this recommendation (to avoid a split-brain scenario) was already dropped.
      So as of vSphere 5.x, I would not include this second Management network in my design.

      Kind Regards,

      Paul
