VMware has announced and shipped a software-based clustered storage solution for vSphere 5 called the Virtual Storage Appliance (VSA). It is aimed at the SMB space, at shops that don't want the cost and complexity of a hardware storage array.
A couple of issues right off the bat. First, the price seems a little high: $7,254 with one year of basic support. At first glance that might not seem so bad – a little less than entry-level hardware from the top-tier providers (I'm thinking the MD3xxx from Dell, N3xxx from IBM, FAS2020 from NetApp; QNAP, Iomega, and Drobo are cheaper).
However, that $7,254 doesn't include any hardware.
OK, what about the hardware? The HCL for the VSA is very limited – just four Dell servers (R510, R610, R710, T610) and a single storage controller (the PERC H700 with 512 MB of cache)!
For disks, VMware recommends eight SAS or SSD drives per server in a RAID10 set. And since this is a cluster, each server needs to match.
Eight disks means a 2U server, not a cheap pizza box. RAID10 means you lose half of your raw storage right off the top.
And since the VSA is a replicated cluster, only half of what's left can be used to store data (each node in the cluster presents two equal-sized datastores, one of which is a replica from another server).
[Illustration from the VSA documentation]
So, of all that hardware you bought, only 25% of the raw storage (at best) can actually be used: RAID10 halves it, and replication halves it again.
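To make that overhead concrete, here is a minimal sketch of the capacity math. The disk count and disk size are hypothetical example values, not from any particular configuration:

```python
# Capacity math for a VSA cluster: RAID10 halves the raw capacity,
# and VSA replication halves the usable capacity again.

def vsa_usable_gb(disks_per_node: int, disk_size_gb: float, nodes: int) -> float:
    """Return the usable (non-replica) capacity of a VSA cluster in GB."""
    raw_per_node = disks_per_node * disk_size_gb
    after_raid10 = raw_per_node / 2    # RAID10: mirrored pairs
    after_replica = after_raid10 / 2   # half of each node holds another node's replica
    return after_replica * nodes

# Hypothetical example: 2 nodes, 8 x 600 GB drives each
raw = 2 * 8 * 600                      # 9600 GB raw across the cluster
usable = vsa_usable_gb(8, 600, 2)      # 2400 GB usable
print(f"raw: {raw} GB, usable: {usable:.0f} GB ({usable / raw:.0%})")
# -> raw: 9600 GB, usable: 2400 GB (25%)
```

Whatever disk sizes you plug in, the ratio comes out the same: 25% of raw becomes usable capacity.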
A more expensive server, more expensive disks, and considerable overhead all add up to a considerable cost! I will be posting performance numbers soon for a 2-node cluster with six 15k drives per node versus a 16-disk SATA EqualLogic array, including list prices and configuration difficulty.
Drop me a line if you have any suggestions for the test.
Note: Testing was delayed because the PERC 6 controllers in my R610s needed to be upgraded to H700s, and now I have a networking issue after the upgrade.