Don’t forget to check your Ethernet switches for jumbo frame and flow control support and recommendations, and double-check your config to make sure it matches. Some switches support both, some prefer one over the other, and sometimes it’s worth trying both to see which performs better with your particular traffic.
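On ProCurve-style switches like the HP 2810, a quick audit might look like the following (a sketch of the ProCurve CLI; exact output columns vary by firmware, so verify against your switch’s manual):

```
show interfaces brief
  (the Flow Ctrl column shows per-port flow control status)

show vlans
  (jumbo frames are enabled per VLAN on ProCurve; check the Jumbo column)
```

The key thing is to compare what the switch reports against what your hosts and storage are actually configured to send.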
My scenario last week: performance is not as expected, the HP 2810 switches don’t support flow control and jumbo frames at the same time, and the ports are configured for jumbo.
I start wondering if I can get a performance increase (however slight) by switching to the paravirtual SCSI adapter. I add an extra drive to the test VM and link it to a new paravirtual adapter… and see a major performance boost. Which is strange. I try it on another machine and see no difference. Interesting. Then I get to thinking about the iSCSI configuration and how the traffic might be running for those two hard drives, and go back to the switches.
I upgrade the switch firmware and double-check the config… only to notice that the 3-port trunk/LAG (Link Aggregation Group) between the two switches had flow control and jumbo both enabled on the individual ports making up the LAG.
The storage array has multiple connections to the two switches, but prefers one switch. The vSphere 5 hosts use Round Robin/iSCSI MPIO and bounce sessions across both switches, so any session sent to the switch the storage doesn’t prefer will traverse the LAG.
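If you want to confirm which path selection policy your hosts are using, the esxcli storage namespace (as of vSphere 5) can show and set it. The device ID below is a placeholder, not a real identifier from my environment:

```shell
# List each device with its current Path Selection Policy (PSP)
esxcli storage nmp device list

# Set Round Robin on a specific device (naa ID is a placeholder)
esxcli storage nmp device set --device naa.6006... --psp VMW_PSP_RR
```

With Round Robin active you should expect I/O to rotate across all active paths, which is exactly why some of it ends up crossing the inter-switch LAG.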
I turned off flow control on the individual ports for the LAG between the switches:
Before:           Read            Write
Test Average:     66.917 MB/sec   58.739 MB/sec
Lowest Mark:      64.130 MB/sec   40.399 MB/sec
Highest Mark:     68.846 MB/sec   65.815 MB/sec

After:            Read            Write
Test Average:     88.196 MB/sec   85.350 MB/sec
Lowest Mark:      84.969 MB/sec   80.732 MB/sec
Highest Mark:     90.632 MB/sec   89.238 MB/sec
(Test results from Intech’s QuickBench, a single-threaded hard drive benchmark. I ran the “Large” 2–10 MB test with 10 passes.)
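For reference, the fix itself is only a couple of lines of switch config. This is a sketch in ProCurve-style CLI; the port numbers 25-27 are assumptions standing in for whatever ports make up your LAG, and the exact syntax may differ by firmware version:

```
configure
no interface 25-27 flow-control
write memory
```

Since jumbo frames were what the iSCSI traffic was configured for, disabling flow control on those LAG member ports removed the conflict rather than the other way around.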