Now I’m back with a twist. It seems Equallogic published a white paper back in November about MPIO with vSphere 5 (note that 4.x has the same issue, which VMware mentions but the NetApp doc doesn’t). VMware followed that with a KB article, so here’s the 30-second version.
Setting up iSCSI MPIO with two VMkernel ports, each with a dedicated NIC, can cause problems. Equallogic (unlike other iSCSI vendors, which typically use iSCSI NOP commands) uses ICMP for iSCSI management tasks such as login/logout. By default, VMware responds to ICMP on a given network from the lowest-numbered VMkernel port on that network.
So in your Equallogic iSCSI setup, you’ll have vmk1 and vmk2, each configured with a dedicated NIC. If the NIC for vmk1 goes down, the host can no longer answer ICMP, and the Equallogic will have login and path-determination issues.
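You can check this from the ESXi shell with vmkping. The interface names and the array group IP below are placeholders for illustration, not values from the original setup:

```shell
# List the VMkernel interfaces and their IPv4 addresses (ESXi 5.x syntax)
esxcli network ip interface ipv4 get

# Test reachability of the array's group IP (192.168.100.10 is a placeholder)
# out of each iSCSI VMkernel port explicitly:
vmkping -I vmk1 192.168.100.10
vmkping -I vmk2 192.168.100.10
```

If vmk1’s NIC is down, the first vmkping fails, and the array’s own ICMP probes back to the host go unanswered for the same reason.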
The fix is to set the first VMkernel port on the switch to use both NICs (i.e. like a normal VMkernel port), then set up two additional VMkernel ports as you would for iSCSI MPIO. Note that you will not do any further configuration on the lowest-numbered port: it cannot be used for iSCSI binding and should not be included in the “access” list on the Equallogic.
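As a rough sketch of that layout in esxcli (ESXi 5.x syntax), where the port group names, uplinks, subnet, and the vmhba33 adapter name are all assumptions for illustration:

```shell
# vmk1: first VMkernel port on the iSCSI vSwitch, normal teaming with
# both NICs active. It answers ICMP but is NOT bound to iSCSI.
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-Mgmt
esxcli network ip interface ipv4 set -i vmk1 -I 10.10.8.11 -N 255.255.255.0 -t static

# vmk2/vmk3 port groups: one active uplink each, as for normal iSCSI
# MPIO port binding (placeholder uplink names).
esxcli network vswitch standard portgroup policy failover set \
    -p iSCSI-1 --active-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set \
    -p iSCSI-2 --active-uplinks=vmnic3

# Bind only vmk2 and vmk3 to the software iSCSI adapter; vmk1 stays unbound.
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3
```

The same layout can of course be built in the vSphere Client; the key point is that the lowest-numbered port keeps both uplinks and is left out of the binding step.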
VMware will answer pings sent to the two additional VMkernel ports (vmk2 and vmk3 in the example) from the first one (vmk1), which has redundant network connectivity. Note that since everything is on the same switch, losing both NICs will still kill all iSCSI and ICMP responses.
Since the two additional ports will have “iSCSI Port Binding” enabled, iSCSI traffic will be dedicated to those two ports only. You will never see iSCSI traffic on the lowest-numbered VMkernel port.
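You can verify the binding from the ESXi shell; vmhba33 here is a placeholder for your software iSCSI adapter name:

```shell
# List the VMkernel ports bound to the software iSCSI adapter.
# Only vmk2 and vmk3 should appear; vmk1 must not be in this list.
esxcli iscsi networkportal list --adapter=vmhba33
```

Each bound portal in the output should report a compliant status and its own physical uplink, confirming that iSCSI traffic is pinned to the two dedicated ports.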