Configure VMware ESXi 4.1 Networking


One of the most interesting parts of configuring a fresh ESXi host is the network. You have an ESXi host with four or more NICs and network-based shared storage such as iSCSI or NFS. On top of that, you want to configure for redundancy, provide vMotion and FT, and enable cluster features such as HA and DRS.

 

There are a few preconditions:

 

  • I leave aside how to distribute the uplinks over on-board NICs and NICs on expansion cards (the ESXi server in this example is virtual)
  • Shared storage is iSCSI
  • Try to fit in FT
  • Everything must be redundant. Assume we have two stacked switches for management, vMotion and LAN traffic, and two stacked switches for iSCSI traffic; it is best practice to use separate switches for storage traffic. The switches have been configured correctly (trunk ports etc.)

 

The first part of the design is the Management network. There are two options for creating a virtual switch: a vNetwork Standard Switch (vSS) or a vNetwork Distributed Switch (vDS). Besides the required license (Enterprise Plus), there is some discussion about whether you should go for a 100% vDS solution or a hybrid approach (a combination of vSS and vDS). For the Management network, however, we prefer a vSS. The other vSwitches can be either a vSS or a vDS.

 

In a vSphere High Availability (HA) cluster the “heartbeat” network plays a very important role, and with that, so does the Management network in ESXi. NIC teaming provides redundancy. The preferred scenario is a single Management network with the vmnics in an Active/Standby configuration. It is also common practice to combine the Management network with the vMotion network. This results in the following design.

Management Network
VLAN 2
Management Traffic is Enabled
vmk0: 192.168.2.53
vmnic0 Active / vmnic1 Standby
Load balancing: Use explicit failover order
Failback: No

 

vMotion
VLAN 21
vMotion is Enabled
vmk1: 192.168.21.53
vmnic1 Active / vmnic0 Standby
Load balancing: Use explicit failover order
Failback: No

 

Needless to say, vmnic0 is connected to the first switch in the stack and vmnic1 to the second switch in the stack.
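
For reference, this Management and vMotion part can be built with the same vicfg commands that are used for the iSCSI vSwitch below. This is only a minimal sketch: it assumes the default vSwitch0 with its default “Management Network” port group, and the Active/Standby order and Failback: No are still set per port group in the vSphere Client (NIC Teaming tab), since vicfg-vswitch does not manage the teaming policy.

# -L link the second physical NIC to the default management vSwitch (vSwitch0 assumed)
/usr/bin/vicfg-vswitch vSwitch0 -L vmnic1

# -v set the VLAN ID for a portgroup (-p)
/usr/bin/vicfg-vswitch -v 2 -p "Management Network" vSwitch0

# -A add the vMotion portgroup and tag it with VLAN 21
/usr/bin/vicfg-vswitch -A vMotion vSwitch0
/usr/bin/vicfg-vswitch -v 21 -p vMotion vSwitch0

# add the vMotion VMkernel NIC; vMotion itself is enabled on this port in the vSphere Client
/usr/bin/vicfg-vmknic -a -i 192.168.21.53 -n 255.255.255.0 -p vMotion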

 

The second part of the design is the Storage network. Another recommendation from the HA and DRS Deepdive is to reduce the chance of a split-brain scenario. A split-brain situation can occur during an HA incident where a virtual machine is restarted on another host while it has not been powered down on the original host. So for all network-based storage, iSCSI included, it is recommended to create a secondary Management network on the same vSwitch as the storage network, so that a storage outage is also detected.

 

As a Storage Adapter to connect to our iSCSI storage, we will use the iSCSI Software adapter. We configure two VMkernel NICs for redundancy and load balancing. Both VMkernel NICs will be assigned to the iSCSI Software adapter.

 

For iSCSI networks it is also recommended to enable Jumbo frames. For more information on Jumbo frames, see this link.
Unfortunately, in vSphere 4.1 it is not possible to use the vSphere Client to create a virtual switch with Jumbo frames enabled; you will have to use a CLI. In this example I used the vSphere Management Assistant (vMA).

iSCSI1
VLAN
vmk2: 192.168.50.53
vmnic2 Active / vmnic3 Unused

 

iSCSI2
VLAN
vmk3: 192.168.50.63
vmnic3 Active / vmnic2 Unused

 

Management Network2
VLAN
Management Traffic is Enabled
vmk4: 192.168.50.73
vmnic2 Active / vmnic3 Active
Load balancing: Use explicit failover order
Failback: No

 

Now the actual code. I assume the iSCSI Software adapter has already been enabled; this can be done with the vSphere Client.

 

# vSwitch1
# -a add new vSwitch
/usr/bin/vicfg-vswitch -a vSwitch1

 

# -m set MTU value
/usr/bin/vicfg-vswitch vSwitch1 -m 9000

 

# -A add portgroup
/usr/bin/vicfg-vswitch -A iSCSI1 vSwitch1
/usr/bin/vicfg-vswitch -A iSCSI2 vSwitch1

 

# -a add VMkernel nic, requires -i IP address, -n Netmask and -p Portgroup. -m set MTU is optional
/usr/bin/vicfg-vmknic -a -i 192.168.50.53 -n 255.255.255.0 -m 9000 -p iSCSI1

 

/usr/bin/vicfg-vmknic -a -i 192.168.50.63 -n 255.255.255.0 -m 9000 -p iSCSI2

 

# -L bind physical NIC to vSwitch
/usr/bin/vicfg-vswitch vSwitch1 -L vmnic2
/usr/bin/vicfg-vswitch vSwitch1 -L vmnic3

 

# -N remove a physical NIC from the portgroup's uplinks, so each iSCSI portgroup keeps only one active uplink
/usr/bin/vicfg-vswitch -p iSCSI1 -N vmnic3 vSwitch1
/usr/bin/vicfg-vswitch -p iSCSI2 -N vmnic2 vSwitch1

 

# add 2nd management portgroup
/usr/bin/vicfg-vswitch -A "Management Network2" vSwitch1

 

# configure VMkernel nic
/usr/bin/vicfg-vmknic -a -i 192.168.50.73 -n 255.255.255.0 -m 9000 -p "Management Network2"

 

# To bind the VMkernel NICs to the Software iSCSI adapter
esxcli --server= --username=root --password= swiscsi nic add -n vmk2 -d

 

esxcli --server= --username=root --password= swiscsi nic add -n vmk3 -d

 

# To check the Software iSCSI adapter
esxcli --server= --username=root --password= swiscsi nic list
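
To verify the result from the vMA, you can also list the vSwitch and VMkernel NIC configuration; the MTU column should show 9000 for vSwitch1 and for both iSCSI VMkernel NICs.

# -l list vSwitches and portgroups (check the MTU of vSwitch1)
/usr/bin/vicfg-vswitch -l

# -l list VMkernel NICs (check MTU 9000 on vmk2 and vmk3)
/usr/bin/vicfg-vmknic -l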

 

An extra recommendation from the HA and DRS Deepdive is to specify an additional isolation address. If you are using network-based storage such as iSCSI, a good choice is the IP address of the storage device; in this example: 192.168.50.11.
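
The isolation address is configured as an advanced option on the HA cluster (Cluster Settings > vSphere HA > Advanced Options in the vSphere Client), not on the host. A sketch of how this typically looks; the second option is only needed if you do not want HA to use the default gateway as an isolation address as well.

# vSphere HA advanced options, set on the cluster
das.isolationaddress1 = 192.168.50.11
# optional: do not use the default gateway as an isolation address
das.usedefaultisolationaddress = false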

 

 

The third and final part is the vSwitch for the virtual machine networks. If your VMs run on VLANs, create a vSwitch and add a port group for every VLAN. Label each port group with a name that reflects the VLAN. In this example two port groups have been created. There are several options for the load balancing policy; a recommended choice is “Use explicit failover order” instead of the default “Route based on the originating virtual port ID”. A CLI sketch follows the port group summary below.

VM Network31
VLAN 31

 

VM Network32
VLAN 32

 

All adapters Active/Active
Load balancing: Use explicit failover order
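
Creating this vSwitch from the vMA follows the same pattern as before. This is a sketch only: the vSwitch name (vSwitch2) and the uplinks (vmnic4 and vmnic5) are assumptions based on the six-NIC layout in this example, and the failover order is again configured per portgroup in the vSphere Client.

# VM traffic vSwitch (name and uplinks are assumptions for this example)
/usr/bin/vicfg-vswitch -a vSwitch2
/usr/bin/vicfg-vswitch vSwitch2 -L vmnic4
/usr/bin/vicfg-vswitch vSwitch2 -L vmnic5

# one portgroup per VLAN, tagged with its VLAN ID
/usr/bin/vicfg-vswitch -A "VM Network31" vSwitch2
/usr/bin/vicfg-vswitch -v 31 -p "VM Network31" vSwitch2
/usr/bin/vicfg-vswitch -A "VM Network32" vSwitch2
/usr/bin/vicfg-vswitch -v 32 -p "VM Network32" vSwitch2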

 

What is missing in this design so far is Fault Tolerance (FT). It is recommended to put FT logging on a separate network. One possibility is to add two extra physical NICs and create an extra vSwitch. With only the six NICs used here, it is also possible to fit FT into the existing design, as shown in the following design; a CLI sketch follows it.

Management Network
VLAN 2
Management Traffic is Enabled
vmk0: 192.168.2.53
vmnic0 Active / vmnic1 Standby
Load balancing: Use explicit failover order
Failback: No

 

vMotion
VLAN 21
vMotion is Enabled
vmk1: 192.168.21.53
vmnic1 Active / vmnic0 Standby
Load balancing: Use explicit failover order
Failback: No

 

FaultTolerance
VLAN 22
Fault Tolerance Logging is Enabled
vmk5: 192.168.22.53
vmnic0 Active / vmnic1 Standby
Load balancing: Use explicit failover order
Failback: No
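
The FaultTolerance portgroup and its VMkernel NIC can be added to the existing Management/vMotion vSwitch in the same way. This sketch assumes vSwitch0 and the next free VMkernel NIC; Fault Tolerance Logging itself is enabled on the VMkernel port in the vSphere Client.

# FT logging portgroup on the Management/vMotion vSwitch (vSwitch0 assumed)
/usr/bin/vicfg-vswitch -A FaultTolerance vSwitch0
/usr/bin/vicfg-vswitch -v 22 -p FaultTolerance vSwitch0

# VMkernel NIC for FT logging
/usr/bin/vicfg-vmknic -a -i 192.168.22.53 -n 255.255.255.0 -p FaultTolerance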

 

 
