VMware iSCSI multipath (Round Robin) for Equallogic Part 2

Must read update for Equallogic multipath
vSphere5 multi-path walkthrough
Update: tag all Round Robin LUNs in one script

Earlier I posted an entry about manually creating what you need for iSCSI multipath on vSphere 4.1 with an Equallogic storage array.

In the comments, a Dell guy pointed out that the EqualLogic Multipathing Extension Module (MEM) installation utility can be used to set up multipath iSCSI even if you don’t have the vSphere Enterprise or Enterprise+ licensing that allows for 3rd-party storage plugins.

Get the script

Log into support.equallogic.com and go to Downloads / VMware Integration.
Click Version 1.0.0 under “EqualLogic Multipathing Extension Module for VMware® vSphere”, then click “EqualLogic Multipathing Extension Module.” Note: don’t bother downloading the user manual separately, as it’s included in the .zip file.

The file you want is setup.pl, in the folder extracted from the downloaded zip. The MEM manual suggests using the vMA, but you could also use the VMware CLI (the Perl-based remote CLI, not the PowerShell one).
The vMA is a good appliance to have; you can get it from here. If you are not familiar with the vMA, read the guide also available at that link.

Put the script where you can run it
To copy setup.pl to the vMA, use an SCP utility such as WinSCP. Connect to the vMA using the IP address assigned to the appliance. The default user name for the vMA appliance is “vi-admin”, and the appliance makes you set a password the first time you turn it on.
Drag and drop setup.pl into the vMA (by default you are connected to the home directory of vi-admin).

While still in WinSCP, once the script is copied, select it, right-click and choose Properties. You’ll want to set the execute permission or you will not be able to run it; one way is to change the octal value to 0755 (0777 also works, but grants more than you need).
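If you’d rather skip WinSCP entirely, the same copy-and-chmod can be done from a Linux or Mac workstation; this is just a sketch, and the vMA address below is hypothetical:

```shell
# Copy the script to vi-admin's home directory on the vMA
# (192.168.1.50 is a placeholder for your appliance's IP):
scp setup.pl vi-admin@192.168.1.50:~
# Then, in the vMA console, make the script executable.
# 0755 is sufficient; 0777 also works but gives everyone write access.
chmod 755 setup.pl
```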

Run the script
Once setup.pl is copied to the vMA appliance and the permissions have been changed, connect to the vMA using an SSH client such as PuTTY, or just use the vSphere client and open the console of the vMA.

Note: if you’re using the VMware CLI, just copy the setup.pl script to a directory on the machine with the CLI installed and run it there. My examples use the vMA, but it works the same either way.

Put your selected host into maintenance mode (if you only have one host, use either the local CLI or run the vMA in VMware Workstation, which is what I do).
Make a note of the NICs you’ll be configuring for iSCSI multipath, as well as the IP addresses you’ll be using (you want one per NIC).
Execute the setup script:

[vi-admin@vMA ~]$ ./setup.pl

If you don’t include any parameters, you will get a list of the available options.

To start the configuration, add the parameter for the server to be configured (hostname or IP) and the script will walk you through the rest:

[vi-admin@vMA ~]$ ./setup.pl --configure --server=
Use of the vMA fastpass is recommended, see the ‘vifp’ command for more information.
You must provide the username and password for the server.
Enter username: root
Enter password:

Do you wish to use a standard vSwitch or a vNetwork Distributed Switch (vSwitch/vDS) [vSwitch]:
I left the default, vSwitch. I love the vDS, but let’s keep this simple, shall we?

Found existing switches vSwitch0.
vSwitch Name [vSwitchISCSI]:
When in doubt, leave the default. Nice useful name too.

Which nics do you wish to use for iSCSI traffic? [vmnic1]: vmnic1,vmnic2
Here is where you enter the NICs, separated by commas. Note the script lists the first unused NIC by default.

IP address for vmknic using nic vmnic1:
IP address for vmknic using nic vmnic2:
Netmask for all vmknics []:
Remember, you want all the iSCSI traffic on the same broadcast domain, separate from other traffic for performance and security. You also do not want iSCSI traffic on the same broadcast domain as any other VMkernel traffic. VMware has 3 types of VMkernel ports (management, FT, vMotion) but 4 types of VMkernel traffic (those three plus iSCSI). iSCSI traffic will use whatever VMkernel port it can to reach the iSCSI targets, which may not be this fancy multipath setup we are building here!

What MTU do you wish to use for iSCSI vSwitches and vmknics? Before increasing
the MTU, verify the setting is supported by your NICs and network switches. [1500]: 9000
Yeah yeah, leave the defaults. Except here: jumbo frames are the default on the EqualLogics, just not in VMware. Don’t forget to enable them on your physical switches as well.
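A quick way to confirm jumbo frames actually work end to end is a don’t-fragment ping from the ESXi shell; the target address below is a hypothetical stand-in for your EqualLogic group IP:

```shell
# -d sets "don't fragment", -s sets the ICMP payload size.
# 8972 = 9000 (MTU) - 20 (IP header) - 8 (ICMP header).
# If any switch or NIC in the path lacks jumbo support, this ping
# fails while a plain vmkping to the same address succeeds.
vmkping -d -s 8972 10.0.50.10
```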

What prefix should be used when creating VMKernel Portgroups? [iSCSI]:

What PS Group IP address would you like to add as a Send Target discovery address (optional)?:
Saves you from that 5-second step in the GUI!

Configuring iSCSI networking with following settings:
Using a standard vSwitch ‘vSwitchISCSI’
Using NICs ‘vmnic1,vmnic2’
Using IP addresses ‘,’
Using netmask ‘’
Using MTU ‘9000’
Using prefix ‘iSCSI’ for VMKernel Portgroups
Using SW iSCSI initiator
Adding PS Series Group IP ‘’ to Send Targets discovery list

The following command line can be used to perform this configuration:
/home/vi-admin/setup.pl --configure --server= --vswitch=vSwitchISCSI --mtu=9000 --nics=vmnic1,vmnic2 --ips=, --netmask= --vmkernel=iSCSI --nohwiscsi --groupip=

Nice summary, and you can copy that command into your documentation, edit it a little, then run it on any other server. Nice touch, E.

Do you wish to proceed with configuration? [yes]:

Configuring networking for iSCSI multipathing:
vswitch = vSwitchISCSI
mtu = 9000
nics = vmnic1 vmnic2
ips =
netmask =
vmkernel = iSCSI
nohwiscsi = 1
EQL group IP =
Creating vSwitch vSwitchISCSI.
Setting vSwitch MTU to 9000.
Creating portgroup iSCSI0 on vSwitch vSwitchISCSI.
Assigning IP address to iSCSI0.
Creating portgroup iSCSI1 on vSwitch vSwitchISCSI.
Assigning IP address to iSCSI1.
Creating new bridge.
Adding uplink vmnic1 to vSwitchISCSI.
Adding uplink vmnic2 to vSwitchISCSI.
Setting new uplinks for vSwitchISCSI.
Setting uplink for iSCSI0 to vmnic1.
Setting uplink for iSCSI1 to vmnic2.
Bound vmk2 to vmhba33.
Bound vmk3 to vmhba33.
Refreshing host storage system.
Adding discovery address to storage adapter vmhba33.
Rescanning all HBAs.
Network configuration finished successfully.
No Dell EqualLogic Multipathing Extension Module found.
Continue your setup by installing the module with the --install option or through vCenter Update Manager.
[vi-admin@vMA ~]$


Pretty nifty.
As promised, it set up my switch and added the NICs and VMkernel ports,

configured the one-to-one NIC/VMkernel port relationship,

and added the iSCSI target.

Works as advertised and the price is right!

Remember to reboot your host, take it out of maintenance mode, and test before throwing all your production machines on it.
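If you want to double-check the result from the command line rather than the GUI, these ESX/ESXi 4.1 commands will do it; a sketch only, since vmhba33 is the software iSCSI adapter name on my host and may differ on yours:

```shell
# Both VMkernel ports created by the script (vmk2 and vmk3 in my run)
# should be listed as bound to the software iSCSI adapter:
esxcli swiscsi nic list -d vmhba33
# Each EqualLogic volume should now show one path per bound vmknic
# (note: this is the 4.1 namespace; 5.x uses "esxcli storage nmp"):
esxcli nmp device list
```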


14 Responses to VMware iSCSI multipath (Round Robin) for Equallogic Part 2

  1. d_glynn says:

    Glad you like it. Keep up the blogging!

  2. Pingback: Tweets that mention VMware iSCSI multipath (Round Robin) for Equallogic Part 2 | SOS tech -- Topsy.com

  3. vm newbie says:

    Great post. Do you have any test results comparing VMware native Round Robin against the EqualLogic MEM? What I heard was that the EqualLogic MEM does true load balancing across all active paths (active/active), while Round Robin only uses one path at any given time.

    • JAndrews says:

      I have not found the time to do the MEM test; too many changes in the VMware world right now. I’m hoping to get the VSA tested first, but the MEM is still on the list.

      Thanks for reading.

  4. ltcadman says:

    With this setup, how many paths does your datastore on the EqualLogic show? I only see two paths per volume, one from the first iSCSI NIC on each ESX host; I do not see the second iSCSI NIC from either host making a connection on the EqualLogic. I have a two-member EqualLogic group with two volumes as extents for one datastore. The ESX storage configuration also shows only two paths, and I did enable Round Robin and rescanned. I also restarted the host, to no avail. When I go into Manage Paths on the datastore, I only see one path listed. I’ll try changing the second host’s path policy, which is still set to “Fixed”. Thanks

    • JAndrews says:

      Make sure you properly dedicated the two VMkernel ports to separate NICs, and ensure any other NIC available to each VMkernel port is set to Unused.

      What version of ESX/ESXi are you using?

      • ltcadman says:

        I double-checked the settings following TR1049 “Configure vSphere SW iSCSI with PS Series SAN v1 2”, and both NICs on each server have their own VMkernel port with different IPs, one NIC set to Unused and the other Active. When this was set up initially, apparently both VMkernel ports had the second NIC set to Unused. I already corrected this and rescanned, but see no difference. The NICs without an active connection (when viewing volumes on the EqualLogic) are the second NICs on both hosts. ESX 4.1 U2 with the latest updates. The EqualLogic (EL) also has the latest firmware. Under the iSCSI software adapter on ESX, the Dynamic Discovery tab lists the EL group’s IP, and Static Discovery shows 2 targets: the EL group’s IP with the two volumes’ IQNs.

        • JAndrews says:

          Can you put screen shots up somewhere?
          Sounds like it wasn’t configured right; a rescan should have shown the extra paths after the fix.
          Are both VMkernel ports on the same switch, same VLAN ID, same IP domain, same subnet mask?
          Jumbo on or off? If this isn’t production, can we disable the NIC or delete the VMkernel port currently in use and see if the other can connect?

  5. jgray205 says:

    With ESXi 4.1 you have to manually bind each iSCSI VMkernel port to the software iSCSI adapter using the CLI. Example: connecting iSCSI ports to the software iSCSI adapter.
    This example shows how to connect the iSCSI ports vmk1 and vmk2 to the software iSCSI adapter vmhba33.
    1. Connect vmk1 to vmhba33: esxcli swiscsi nic add -n vmk1 -d vmhba33
    2. Connect vmk2 to vmhba33: esxcli swiscsi nic add -n vmk2 -d vmhba33
    3. Verify the vmhba33 configuration: esxcli swiscsi nic list -d vmhba33
    Both vmk1 and vmk2 should be listed.
    See page 40 of the ESX 4.1 iSCSI SAN Configuration Guide. Hope this helps.

  6. mhoward says:

    So going back to VMware licensing: if VMware requires the Enterprise edition in order to leverage the Storage APIs for Array Integration and Multipathing, is VMware truly allowing the MEM plugin here? If so, does VMware support this type of configuration? It seems odd to me that you could install a third-party plugin without being licensed properly.

  7. Don Willilams says:

    One thing to remember is that without the MEM you need to change the pathing for all EQL volumes to VMware Round Robin AND change the IOs per path from 1000 to 3. This will enhance performance. This script will do that for you:

    Solution Title HOWTO: Change IOPs value // Round Robin for MPIO in ESXi v5.x
    Solution Details This is a script you can run to set all EQL volumes to Round Robin and set the IOPs value to 3.

    esxcli storage nmp satp set --default-psp=VMW_PSP_RR --satp=VMW_SATP_EQL ; for i in `esxcli storage nmp device list | grep EQLOGIC | awk '{print $7}' | sed 's/(//g' | sed 's/)//g'` ; do esxcli storage nmp device set -d $i --psp=VMW_PSP_RR ; esxcli storage nmp psp roundrobin deviceconfig set -d $i -I 3 -t iops ; done

    After you run the script you should verify that the changes took effect.
    #esxcli storage nmp device list

    **New volumes will still need to have the IOPs value changed.
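One possible way around re-running the loop for new volumes (an assumption on my part, not from Don’s post: this needs ESXi 5.1 or later, where `esxcli storage nmp satp rule add` accepts a PSP option) is a claim rule that sets the defaults at claim time:

```shell
# Claim rule sketch: any newly discovered EQLOGIC volume defaults to
# Round Robin with IOPS=3, so the per-device loop above only matters
# for volumes that existed before the rule was added.
esxcli storage nmp satp rule add --satp=VMW_SATP_EQL --psp=VMW_PSP_RR \
  --psp-option="iops=3" --vendor="EQLOGIC" --description="EQL RR IOPS=3"
```

Verify with `esxcli storage nmp satp rule list` and, as Don says, check the devices afterwards.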


  8. Don Willilams says:

    The script I posted is for ESXi v5.0 or v5.1. ESX v4.x needs a different script. If someone wants it, I can post that as well.

  9. Kyle says:

    Hello Don,
    When I run your script on my test EQL I receive the following:

    Error: Unknown command or namespace storage nmp satp set .default-psp=VMW_PSP_RR .satp=VMW_SATP_EQL

    awk: cmd. line:1: Unexpected token
    sed: unsupported command .
