The challenge is to configure a FAS3240 for iSCSI and CIFS traffic, giving both the best performance possible.
The iSCSI network consists of two HP ProCurve 2810s in a “stack” (which is what HP calls managing two switches from the same interface). The switches are joined by three CAT6 cables in a “trunk” (which is what HP calls an 802.3ad link aggregation group). The stack currently has VLAN tagging enabled and configured for the existing FAS2020, but that unit is being replaced, at which point VLAN tagging will be removed from the switch.
The data network consists of two Enterasys SecureStack B2 switches in a “real” stack configuration.
The FAS3240 has two 1Gb ports built in, and it was ordered with a quad-port 1Gb NIC added to each head.
I’ll address this from one controller’s point of view; the second is just a duplicate.
The FAS3240 will have 24 15k SAS disks attached to one head and 28 7,200 RPM SATA disks attached to the other head.
With six 1Gb connections we have some choices to make. We will be running iSCSI traffic to the HP switches and CIFS traffic to the Enterasys. Each will need redundancy, so there go four of our ports (two iSCSI, two CIFS).
Now, do we want to use the two leftover ports both for iSCSI, or split them one each? Certainly iSCSI could use the extra bandwidth – but would it actually be able to use it?
Since the HP switches are not a true stack, we cannot span switches with link aggregation. Our current two 1Gb ports will each be cabled to a different switch. If we add a second 1Gb cable to one of the existing switches, those two can be aggregated for 2Gb total (note that an individual session is limited to 1Gb, but one server could have two 1Gb sessions running).
If we add the second 1Gb cable to the other switch as well, those two can also be aggregated, giving us two linked pairs. However, since we cannot span the switches with the aggregation, the filer will only be able to use one pair at a time; i.e. we would create two multi-mode vifs and then put them both in a single-mode vif, as sketched below.
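For the four-NIC option, the layered config would look something like this (a sketch only; the port pairings are assumptions, with e2a/e2b cabled to one HP switch and e0b/e2c to the other):
#two 2-NIC multi-mode vifs, one per HP switch
ifgrp create multi iSCSI-1 e0b e2c
ifgrp create multi iSCSI-2 e2a e2b
#single-mode vif on top; only one pair is active at a time
ifgrp create single iSCSI iSCSI-1 iSCSI-2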
Four NICs for iSCSI Pro: no bandwidth loss if we lose the primary iSCSI switch
Four NICs for iSCSI Con: two NICs sit standby-only unless the primary switch goes down
Three NICs for iSCSI Pro: an extra NIC freed up for CIFS; 2Gb of bandwidth
Three NICs for iSCSI Con: reduced bandwidth (1Gb) in the event the primary switch fails
The Enterasys switches are a true stack, so we can take our two 1Gb NICs and aggregate them while having each connected to a different switch. Poof, 2Gb of bandwidth. If we take one of the extra NICs we can up it to 3Gb! Sweet. No downsides from the CIFS side.
Configuration:
NetApp:
#trademarked naming convention
hostname FAS3240A
#create a one-NIC vif for the failover 1Gb connection
ifgrp create multi iSCSI-1 e0b
#create the primary vif
ifgrp create multi iSCSI-2 e2a e2b
#create the parent iSCSI vif
ifgrp create single iSCSI iSCSI-1 iSCSI-2
#use the two-NIC vif when it is up
ifgrp favor iSCSI-2
#set the IP address for iSCSI
ifconfig iSCSI `hostname`-iSCSI netmask 255.255.255.0 mtusize 9000 partner iSCSI
#much easier config for CIFS since we can span the switch stack and aggregate all the ports
ifgrp create multi CIFS e0a e2c e2d
ifconfig CIFS `hostname`-CIFS netmask 255.255.255.0 partner CIFS
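Note that ifgrp and ifconfig commands run at the CLI do not survive a reboot, so the same lines belong in /etc/rc, and the `hostname`-iSCSI and `hostname`-CIFS names need matching entries in /etc/hosts. A minimal sketch (the addresses are placeholders, not from this setup):
#/etc/hosts
192.168.10.11 FAS3240A-iSCSI
192.168.20.11 FAS3240A-CIFS
Once everything is up, ifgrp status should show iSCSI-2 as the active link inside the single-mode vif.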
HP:
trunk 15-16 Trk2 Trunk
(Trk1 is the switch interlink)
(don’t forget to enable JUMBO)
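On the 2810, jumbo frames are enabled per VLAN rather than per port. Assuming the iSCSI ports end up as untagged members of VLAN 1 once the tagging is removed, it is a one-liner from config mode:
vlan 1 jumbo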
Enterasys:
set lacp enable
set lacp static lag.0.1 g.1.13
set lacp static lag.0.1 g.2.21
set lacp static lag.0.1 g.2.22
(the documentation appears to let you list all the ports on one line, but I could not get it to work)
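To confirm the LAG actually formed, something like the following should list the member ports (hedging a little here; verify the exact syntax against your SecureStack firmware):
show lacp lag.0.1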
Make sure you plug the right cables in and have the VLANs set correctly (untagged on the switches, since we don’t have VLAN tagging configured on the filer).
Voila, 2Gb/s iSCSI, 3Gb/s CIFS, and full redundancy.
Don’t forget to enable iSCSI multipathing on your vSphere 5 or vSphere 4 hosts.
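That means binding each iSCSI vmkernel port to the software iSCSI adapter. A sketch, where vmk1 and vmhba33 are placeholders for your actual vmkernel port and adapter name:
#vSphere 5
esxcli iscsi networkportal add --nic vmk1 --adapter vmhba33
#vSphere 4
esxcli swiscsi nic add -n vmk1 -d vmhba33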