50%-off storage: is it too good to be true?

There are many conditions to NetApp's offer, says Jim Damoulakis

Storage vendors of every stripe have been feverishly working to hitch their wagons to the server virtualisation juggernaut. Going far beyond the basic integration and certification activities that one would expect for support of a popular application, storage vendors are building management functionality into their products and developing other ways to differentiate their VMware support.

One of the boldest announcements came from NetApp earlier this month, purporting to guarantee that users will use 50% less storage in their virtual environments if they deploy NetApp gear rather than a competitor's. Now, before getting too excited, it is necessary to point out that this guarantee comes with a list of conditions reminiscent of a TV prescription drug ad, and I suspect that competitors will have a field day with some of them.

Nonetheless, there are two important takeaways. NetApp is seriously stepping up its endorsement of thin provisioning as a key technology for virtualised environments, and, even more noteworthy, is its support for deduplication as a primary storage technology. Until now, deduplication has been targeted almost exclusively at secondary data, particularly backup, but the commonality of data across guest virtual machines (think C: drives) is such that there is potential for substantial savings.
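The savings mechanism is easy to see with a toy model. The sketch below (illustrative only; block size and data layout are assumptions, not NetApp's implementation) hashes fixed-size blocks across two simulated guest disk images that share a common operating system image, then counts how many unique blocks actually need to be stored:

```python
import hashlib

BLOCK_SIZE = 4096  # hypothetical 4 KB deduplication block size for this sketch

def dedupe_count(images):
    """Return (unique_blocks, total_blocks) across a set of disk images."""
    total, unique = 0, set()
    for image in images:
        for i in range(0, len(image), BLOCK_SIZE):
            block = image[i:i + BLOCK_SIZE]
            total += 1
            # Identical blocks hash to the same digest, so they are stored once.
            unique.add(hashlib.sha256(block).digest())
    return len(unique), total

# Two guest "C: drives": 100 shared OS blocks plus 10 unique blocks each.
os_image = b"".join(bytes([i]) * BLOCK_SIZE for i in range(100))
vm1 = os_image + b"".join(bytes([100 + i]) * BLOCK_SIZE for i in range(10))
vm2 = os_image + b"".join(bytes([110 + i]) * BLOCK_SIZE for i in range(10))

stored, logical = dedupe_count([vm1, vm2])
print(f"{stored} blocks stored for {logical} logical blocks")  # 120 of 220
```

Because the 100 OS blocks are common to both guests, only 120 of the 220 logical blocks are stored, a saving of roughly 45%; with dozens of near-identical guests the ratio improves further.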

However, we are very much in the early stage of the storage battleground inside the virtualisation arena. VMware's recent announcement of its Virtual Datacenter Operating System (VDC-OS), and specifically its vStorage and related components, promises fresh fields of fertile ground for storage vendors to plow. Far more significant than mere integration with VirtualCenter, this will open the floodgates to a new range of VMware-friendly capabilities. Specifically, the vStorage APIs will lead to tighter integration of storage-related functions including load balancing and other I/O performance enhancements, thin provisioning, snapshots and replication, as well as more comprehensive capacity and performance management.

Ultimately, for those tasked with architecting virtualised infrastructures, this is good news, but it means that current storage design assumptions and common practices that have evolved over the past few years will need to be re-evaluated. It's a good time to step up long-range planning and ensure a good understanding of vendor roadmaps in this area.

Jim Damoulakis is chief technology officer of GlassHouse Technologies, a provider of independent storage services. He can be reached at jimd@glasshouse.com
