vSphere5: VSA overhead

VMware has announced and shipped a software-based clustered storage solution for vSphere 5 called the Virtual Storage Appliance (VSA). It is aimed at the SMB space, at shops that do not want the cost and complexity of a hardware storage array.

A couple of issues right off the bat. First, the price seems a little high: $7,254 with one year of basic support. At first glance that might not seem so bad – a little less than the entry-level hardware from top-tier providers (I’m thinking the MD3xxx from Dell, the N3xxx from IBM, the FAS2020 from NetApp; QNAP, Iomega, and Drobo are cheaper).

However, that $7200 doesn’t include any hardware.

OK, what about the hardware? The HCL for VSA is very limited – there are only four Dell servers (R510, R610, R710, T610) and only one storage controller (the H700 with 512MB of cache)!

For disks, VMware recommends eight SAS or SSD drives per server in RAID10, and since it is a cluster, each server needs to match.
Eight disks means a 2U server, not a cheap pizza box, and RAID10 means you lose half your raw storage right off the top.
On top of that, because VSA is a replicated cluster, only half of the remaining storage can be used to store data: each node in the cluster exports two equal datastores, one of which is a replica from another server.

[Figure: illustration from the VSA documentation]

So, of all that hardware you bought, only 25% (at best) of your raw storage can actually be used for data.
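To make that concrete, here is a minimal back-of-the-envelope sketch in Python (the disk sizes are hypothetical, not from VMware's sizing guidance): RAID10 halves the raw capacity on each node, and VSA replication halves it again, so roughly a quarter of what you buy holds unique data, regardless of node count.

```python
# Rough VSA usable-capacity math (illustrative sketch only).

def vsa_usable_tb(nodes, disks_per_node, disk_tb):
    raw = nodes * disks_per_node * disk_tb   # total raw capacity purchased
    after_raid10 = raw / 2                   # RAID10 mirrors every disk
    usable = after_raid10 / 2                # VSA keeps a replica of each datastore
    return raw, usable

for nodes in (2, 3):
    raw, usable = vsa_usable_tb(nodes, disks_per_node=8, disk_tb=0.6)
    print(f"{nodes} nodes: {raw:.1f} TB raw -> {usable:.1f} TB usable "
          f"({usable / raw:.0%})")

# 2 nodes: 9.6 TB raw -> 2.4 TB usable (25%)
# 3 nodes: 14.4 TB raw -> 3.6 TB usable (25%)
```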

A more expensive server, more expensive disks, and considerable capacity overhead all add up to a considerable cost! I will be publishing performance numbers soon for a two-node cluster (six 15k drives per node) versus an EqualLogic 16-disk SATA array, including list prices and configuration difficulty.

Drop me a line if you have any suggestions for the test.

Note: Testing was delayed because the PERC6 controllers in my R610s needed to be upgraded to H700s, and now I have a networking issue after the upgrade.


2 Responses to vSphere5: VSA overhead

  1. >So, of all that hardware you bought you can actually use 25% of a 2-node cluster and 33% of a 3-node cluster.

    Is it not still 25% even if using a 3-node cluster? Each node “loses” 50% to RAID10 and then uses about half of that to create one writable volume each = 1/4 of the total disk space per node. If three nodes each use 1/4, that makes 3/12, which is 25%.

    (Or 6 writable disks out of 24, if using the supported minimum of 8 disks in each node.)

    • JAndrews says:

      You are correct, I will change the article. Thanks.

      Dang, something really trashed that article when it got posted. I didn’t realize the text duplicated halfway through and there were so many grammatical issues (I did not put Satan instead of SATA! Honest!)
