VMware iSCSI Multipath (vSphere5)

Update: tag all Round Robin LUNs in one script

I’ve posted about Multipath twice before: manual command-line and using the Equallogic script.

With vSphere 5 you can now do it all from the GUI, though admittedly with many more steps.

Disclaimer: Check with your storage vendor for best practices and methods. This is based on Equallogic’s best practices for vSphere 4.x – I’ll update as new info comes out.

Note: The point of this is to give the iSCSI initiator two paths to choose from. Creating one VMkernel port and putting it in a vSwitch with multiple NICs gives it one fault-tolerant path, or (with IP hash) lets the network stack make NIC decisions instead of the iSCSI initiator making path decisions.

Start out by adding a VMkernel connection.

If you don’t already have a switch to use, create one now and pick the NICs to use; otherwise select an existing switch.

Enter a name for the VMkernel port, keeping in mind this will be the first of two you will create.
Best practices have you separating traffic, so don’t choose vMotion, FT or management for this VMkernel port.

Enter an IP address for the port. Note that the VMkernel has only one default gateway that is shared by all VMkernel ports. For best results put the iSCSI VMkernel ports and targets on the same network, separate from management.

Verify your choices and click Finish.

To add the second VMkernel port for iSCSI multipath, select the switch you added the first NIC to (note that another option is to create a new switch for the second VMkernel port) and click properties.

Click Add and walk through the same wizard you used to create the first VMkernel port (note you won’t be prompted for switch/NIC since you selected the switch already).
When finished you’ll have the new switch with two NICs.
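The same setup can also be scripted with esxcli on the ESXi 5 host. The names and addresses below (vSwitch1, iSCSI1/iSCSI2, vmnic1/vmnic2, the 10.10.10.x IPs) are examples only — substitute your own:

```shell
# Create the vSwitch and attach two physical NICs (example names)
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2

# One port group per VMkernel port
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI2

# One VMkernel interface per port group, each with its own IP on the iSCSI network
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI1
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.10.10.11 --netmask=255.255.255.0 --type=static
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI2
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.10.10.12 --netmask=255.255.255.0 --type=static
```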

Now, go back into the switch properties and open the properties of each VMkernel port. On the last tab, select “Override switch failover order:” and set only one NIC to Active; move any others to Unused. Each VMkernel port should be set to use a different NIC.
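The equivalent from the command line (same example names as above) pins each port group to a single active uplink — in my experience any uplink not listed ends up unused, but verify the resulting policy with the matching `get` command:

```shell
# Pin each iSCSI port group to a single active uplink (example names)
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI1 --active-uplinks=vmnic1
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI2 --active-uplinks=vmnic2

# Verify the override took effect
esxcli network vswitch standard portgroup policy failover get --portgroup-name=iSCSI1
```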

New for vSphere 5
Up to this point, the process is no different than using the GUI in 3.x/4.x.
Now things get fun.

You may want to take the time now to set the MTU for the VMkernel ports and switch to 9000 – assuming the rest of the connecting physical switches and the iSCSI target are all set to allow jumbo frames (check with your switch and target vendors).
You set jumbo frames (MTU = 9000) on the switch and each VMkernel port in their respective properties windows.
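Or from the command line (example names again) — the MTU must be raised on both the vSwitch and each VMkernel interface:

```shell
# Jumbo frames: MTU 9000 on the vSwitch and on each VMkernel interface
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
esxcli network ip interface set --interface-name=vmk2 --mtu=9000
```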

To add the iSCSI initiator (no longer added by default) go to Storage Adapters and click “Add…”
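This, too, can be done with esxcli — enable the software adapter, then list the adapters to see which vmhba name it was given:

```shell
# Enable the software iSCSI adapter (no longer enabled by default in vSphere 5)
esxcli iscsi software set --enabled=true

# Find the vmhba name the adapter was assigned
esxcli iscsi adapter list
```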

Click OK twice

Open the properties of the newly added (and enabled) iSCSI Initiator and select the Network Configuration tab.

Click Add. If you do not see any Port Group/VMkernel adapters to add, revisit the step before the “New for vSphere 5” section – you have not assigned one (and only one) NIC to your intended iSCSI-dedicated VMkernel ports.

What you should see is the Port Group/VMkernel adapters you created earlier listed. (Note: a “VMkernel Port” is a port group with only one connection – a virtual NIC for the VMkernel. This is more apparent from the command line, where creating a VMkernel port is a two-step process.)

Select one of the previously added VMkernel Ports (Port Group/VMkernel Adapter) and click OK. Repeat to add both adapters. You should see both listed in the Network Configuration tab.
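The port binding itself is also scriptable. vmhba33 below is an example name — use whatever `esxcli iscsi adapter list` reported for the software adapter on your host:

```shell
# Bind both VMkernel ports to the software iSCSI adapter (vmhba33 is an example)
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Verify both bindings
esxcli iscsi networkportal list --adapter=vmhba33
```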

Note that if you return to the properties of each VMkernel port you should see the iSCSI Port Binding property checked.

Once you have added and configured the proper iSCSI target, you should see two paths listed for each LUN/volume presented. To turn on Round Robin multipathing, open the properties of each datastore and choose “Manage Paths”.
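For the target-and-rescan part from the command line (the adapter name and group IP below are examples):

```shell
# Point the initiator at the array's group IP (example address), then rescan
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.10.10.100
esxcli storage core adapter rescan --adapter=vmhba33
```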

Pick “Round Robin” from the Path Selection drop-down and presto! iSCSI Multipathing.
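Per the update at the top, you can also tag every LUN Round Robin in one pass rather than clicking through each datastore. This sketch assumes EqualLogic volumes, whose device IDs start with the naa.6090a0 prefix — adjust the match for your array, and spot-check one device before looping:

```shell
# Set Round Robin on every EqualLogic volume in one pass.
# naa.6090a0 is EqualLogic's device-ID prefix -- adjust for your storage vendor.
for dev in $(esxcli storage nmp device list | grep -o '^naa\.6090a0[0-9a-f]*'); do
    esxcli storage nmp device set --device=$dev --psp=VMW_PSP_RR
done

# Confirm the path selection policy on the devices
esxcli storage nmp device list | grep -A1 '^naa\.6090a0'
```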

This entry was posted in Cloud, Computing, Storage, Virtualization, VMware.

5 Responses to VMware iSCSI Multipath (vSphere5)

  1. TooMeeK says:

    I see round-robin here. What is performance with 2 NICs only?
    For Gigabit network and short cable latency is 0,8 ms only (tested). But don’t know with round-robin..

  2. JAndrews says:

    RoundRobin simply chooses which path storage I/Os will take to the datastore. Wire performance is not affected.

  3. david says:

    can you add 4 nics to the group or are you limited to 2?

  4. Pingback: Week 3 | Chris' PLE
