If you're an IT executive, chances are you're already thinking about storage virtualisation. Nearly one-quarter of companies with at least 500 employees have deployed storage virtualisation products already, and another 55% plan to do so within two years, a recent Gartner survey found.
Storage virtualisation is an abstraction that gives servers and applications a view of storage that is different from that of actual physical storage, typically by aggregating multiple storage devices and allowing them to be managed in one administrative console.
The technology is emerging fast onto the enterprise scene for good reasons: In many cases, it can reduce the management burdens associated with storage, and it offers better models for datacentre migrations, backup and disaster recovery.
However, there are still common pitfalls that storage administrators should consider, as well as questions they should ask, before rolling out a storage-virtualisation project. Here's a look at some of the top issues.
1. Managing capacity
With storage virtualisation, allocating storage is easy — perhaps too easy.
"You have the ability to affect more systems in the whole forest if you do something," says Jonathan Smith, CEO of outsourcer ITonCommand , who cautions IT shops to pay close attention to both the storage and performance needs of each application. "You just didn't have that power before. Now all of a sudden you can do whatever you want."
Smith, who is using LeftHand Networks virtualisation on HP storage, says an IT pro might see a lot of empty space in a given storage volume and be tempted to fill it up. Overusing a resource, however, can decrease performance if the storage is allocated to a database or some other I/O-intensive application.
"Make sure you size it correctly and really understand how much horsepower [your applications need]," Smith says.
These concerns apply especially to thin provisioning, a component of virtualisation technology that lets an IT administrator present an application with more storage capacity than is physically allocated to it. This eliminates the problem of storage over-provisioning, in which storage capacity is pre-allocated to applications but never used.
With thin provisioning, more than 100% of storage capacity can be allocated to applications, but capacity remains available because it won't be consumed all at once.
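The accounting behind that over-allocation can be sketched in a few lines. This is a hypothetical illustration (the class and method names are invented, not any vendor's API): logical capacity promised to volumes can exceed the pool's physical capacity, because physical space is only consumed as data is actually written.

```python
# Hypothetical sketch of thin-provisioning accounting. The names
# (ThinPool, provision_volume) are illustrative, not a real product API.

class ThinPool:
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.allocated_gb = 0   # logical capacity promised to volumes
        self.consumed_gb = 0    # physical space actually written

    def provision_volume(self, logical_gb):
        # Unlike thick provisioning, this reserves no physical space,
        # so total allocation may exceed 100% of the pool.
        self.allocated_gb += logical_gb

    def write(self, gb):
        # Physical space is consumed only on write; the pool is
        # exhausted when real writes, not allocations, hit the limit.
        if self.consumed_gb + gb > self.physical_gb:
            raise RuntimeError("pool exhausted: add physical disks")
        self.consumed_gb += gb

pool = ThinPool(physical_gb=1000)
pool.provision_volume(600)
pool.provision_volume(600)   # 1,200GB allocated against 1,000GB physical
pool.write(300)              # fine: only 300GB actually consumed
```

The over-commitment is safe only as long as cumulative writes stay below the physical ceiling, which is exactly why the monitoring discipline described next matters.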
You can play it safe by allocating small volumes that never exceed the physical storage, or you can allocate as much as you want to each application and then monitor your systems closely, says Themis Tokkaris, systems engineer at Truly Nolen Pest Control. It's best if you can find a happy balance between those two extremes.
"You have to monitor your pool so you don't run out of space, because that would really crash everything," Tokkaris says.
2. How server virtualisation fits in
A common question is whether it makes sense to virtualise storage if you're not also using server virtualisation. The short answer is yes — though it's true you won't get as much flexibility as IT shops that virtualise both servers and storage.
"If you virtualise both, then you have the maximum flexibility when deploying new applications," says Chris Saul, IBM's storage-virtualisation marketing manager.
Nevertheless, there are benefits to just virtualising storage.
Improved disaster recovery, availability and data migrations can all be gained without having virtual servers, says Augie Gonzalez, product marketing manager of storage virtualisation vendor DataCore Software. In addition, storage virtualisation by itself can provide thin provisioning, as well as the simplified management structure that comes with pooling storage devices and managing them from a central console.
On the flip side, virtualising servers without virtualising storage is problematic. It doesn't make sense to have multiple virtual servers on a physical machine that aren't able to share data, says Enterprise Strategy Group (ESG) analyst Mark Peters.
"You can gain tremendous benefits from storage virtualisation, even without server virtualisation. It's harder the other way around," Peters says.
3. Virtualisation in a heterogeneous environment
Given that virtualisation is designed to combine multiple storage devices, it's not immediately obvious why it makes sense to virtualise your storage if it all comes from a single vendor.
There are compelling reasons, however, says storage analyst Arun Taneja, of the Taneja Group. "A lot of people think storage virtualisation has a prerequisite of heterogeneity, that it only comes into play when storage from three companies is involved," he says. "I say, forget it, it has value even if you are stuck with a single vendor."
The storage market is more proprietary than just about any other IT space, and this creates problems even if you have just one storage vendor, Taneja says.
Say you're an EMC customer with two Symmetrix DMX boxes, and "you just want to combine the power of those two boxes and manage it as one," Taneja says. "[Without storage virtualisation] you can't do it. That's how ridiculous the world of storage is."
This "ridiculous" level of exclusivity in the storage market obviously takes on a new dimension when you're managing storage from multiple vendors. That leads to the next issue.
4. Choosing a vendor
The primary procurement dilemma for organisations is whether to buy storage-virtualisation products from their storage vendor or from a third party.
If your true objective is flexibility, especially if you're planning major data migrations, a third party is the way to go, Taneja says. Vendors such as FalconStor Software and DataCore are capable of managing storage from multiple vendors simultaneously, whether they are EMC, HP, IBM or Hitachi.
Truly Nolen chose a third party, DataCore, even though the company uses only HP storage. The company evaluated virtualisation vendors including HP, EMC, and Dell EqualLogic, but settled on DataCore because it was less expensive and offers the flexibility of using whichever hardware vendor it likes, Tokkaris says.
The major storage vendors promise to be able to manage a heterogeneous environment. Examples include IBM's SAN Volume Controller, NetApp's V-Series, and EMC's Invista. As a general rule, though, vendors support their own storage products first and others second, if at all.
"They always support their own systems first," Taneja says. "That means EMC's Invista supports DMXs and Clariions, and they might support some other foreign devices; but the support for foreign devices always lags, and support for foreign devices is always incomplete. The whole idea is don't support your enemies' boxes."
Peters predicts that as storage virtualisation becomes more common, market pressure will force vendors to do a better job supporting their rivals' technology.
If you get storage from just one vendor, however, the solution is simple.
"I say to the IT people I talk to, if you're a Hitachi Data Systems customer and you like working with them and you're stuck with them, just buy their virtualisation to make life more manageable within Hitachi product," Taneja says.
5. Sifting through the hype
By most accounts, storage virtualisation is a no-brainer. Who wouldn't want to manage multiple storage devices from a single console, and gain data mobility that makes disaster recovery a breeze?
Storage virtualisation will be about as common as automatic transmissions in cars within a couple of years, ESG's Peters says. "There are certain technologies that are just smarter and better than people doing it manually," he says.
Even storage virtualisation vendors, however, can admit there are instances when the technology isn't a fit.
Storage virtualisation is not for everyone, says Kyle Fitze, an HP director of storage marketing. Virtualisation actually adds a layer of complexity, he argues: you have to manage the individual storage devices as well as the virtualisation layer, and even with virtualisation in place, users still have to perform tasks such as reconfiguring devices after adding physical disks to storage arrays.
"There's a complexity/benefit tradeoff," Fitze says. "If [an organisation's] current environment is difficult to manage and complex . . . adding a virtualisation layer can simplify that complexity. If it's a small, efficiently managed environment without data-protection challenges, then virtualisation just for virtualisation's sake is probably not a good idea."