FRAMINGHAM (09/23/2003) - Remember when cutting storage costs and administrative headaches simply meant implementing a storage-area network? Those days are gone. Now storage heavyweights Brocade Communications Systems Inc. and Cisco Systems Inc. are pushing a new panacea - moving storage intelligence off storage area network (SAN) servers and storage devices and into the SAN fabric itself.
Although more intelligent SAN switches and appliances promise easier management and better scalability, the trick for network executives will be to make the move cost-effective, users and analysts say.
"Not everyone will be able to take advantage of this move to put more storage intelligence in the SAN fabric," says Tom Barclay, lead developer and program manager for the TerraServer project, which provides interactive U.S. Geological Survey aerial maps to users via the Web. "It will make sense only for the biggest, most complex SANs."
Vendors agree. "You won't think about moving intelligence into the fabric until you have a very large SAN in place, with a very large amount of servers, disk drives and arrays," says Tom Buiocchi, marketing vice president at Brocade, which comes by its intelligent storage switch technology via the November 2002 acquisition of start-up Rhapsody Networks. "At that point, it becomes crucial."
The key is making sure not only that you save on management costs, but also that you move the right applications to the fabric and can recoup your investment in other intelligent devices within the SAN.
How it works
In traditional SANs, the intelligence needed for key storage applications such as virtualization, snapshot copying, data replication and disk mirroring resides primarily at the host, or server, level.
This means that to implement virtualization, in which servers view all enterprise storage devices as one large pool of storage, the virtualization software must run on every server.
"Say I have 100 servers across my company, and 25 are from Dell running NT, 25 are from Sun, 25 are from IBM and 25 are from HP (Hewlett-Packard Co.)," Buiocchi says. "If I want to do virtualization today, typically I buy software and load it on all my servers." That's 100 licenses, with some for NT, some for Solaris and so on, he says.
The result is 100 management touch points for the storage administrator, who needs to track and maintain what the servers are running. But what if you could move that same piece of software off those 100 servers and on to one or two SAN switches?
"It's essentially the same software customized to run on a switch as opposed to a server," he says. "And now you've reduced that management toll to one or two touch points."
You've also opened up the storage to gain a many-to-many relationship, analysts say. "Usually, when storage is tied to the server or host, SAN users with access to that host can reach only the one storage device attached there," says David Hill, vice president of storage research at Aberdeen Group. "But when you move that functionality out to the switch, now many users have access to many hosts and storage devices. It scales up the storage environment."
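The pooling that Buiocchi and Hill describe can be sketched in a few lines of Python. This is a hypothetical model for illustration only, not any vendor's API: a virtualization layer keeps a map from pool-wide logical blocks to (array, physical block) pairs, so every host addresses one logical pool no matter which physical device actually holds the data.

```python
# Minimal sketch of block-level storage virtualization (hypothetical model,
# not any vendor's API): several physical arrays folded into one logical pool.

class VirtualPool:
    def __init__(self):
        self.arrays = {}        # array name -> capacity in blocks
        self.extent_map = []    # (logical_start, length, array, phys_start)
        self.next_logical = 0

    def add_array(self, name, blocks):
        """Fold a physical array into the single logical pool."""
        start = self.next_logical
        self.extent_map.append((start, blocks, name, 0))
        self.arrays[name] = blocks
        self.next_logical += blocks
        return start

    def resolve(self, logical_block):
        """Translate a pool-wide logical block to (array, physical block)."""
        for start, length, array, phys in self.extent_map:
            if start <= logical_block < start + length:
                return array, phys + (logical_block - start)
        raise ValueError("block outside pool")

pool = VirtualPool()
pool.add_array("array-1", 1000)   # hypothetical array names
pool.add_array("array-2", 500)
print(pool.resolve(1200))          # falls in the second array's extent
```

Whether this mapping lives on 100 servers or on one or two switches, the translation logic is the same; moving it into the fabric just centralizes where the map is kept and maintained.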
Make the move judiciously
While the virtualization scenario above seems to make sense, other types of storage intelligence should never move to the fabric. "Server failover and multipathing will always run on the server. That's software that says, 'Hey, I tried to write some data to this switch but that switch is dead. I need to write it across another switch so it gets down to the storage.' That software has to run on the server," Buiocchi says. "The same goes for disk caching and RAID functionality. That needs to run very close to the disk drives in the array, so that's going to stay there."
It also depends on the granularity of what you're managing, Hill says. "If I have a tape library, and I want to capture information about its health and how that particular unit is performing, that's separate from the management of the actual data that goes onto it. It's very local to the particular device, tape drive, library or disk drive. That's the stuff that won't move to the network."
Dave Uvelli, tape library program manager at Advanced Digital Information Corp., a leading tape library vendor, agrees. "There are lots of things we do at the tape library level that could never be moved to the switch. We can tell you if a tape drive is full or not functioning properly. There's no way a switch vendor can possibly manage that complexity from within the network," he says.
Hill says moving intelligence to the network is sensible only for "big picture" items. "Where data needs to be moved across multiple storage platforms or across many heterogeneous servers, it makes sense to centralize that function in the middle of the network so you're isolated from the idiosyncrasies of 18 different servers and 41 different disks," he says.
Switch or appliance?
Once you decide to move intelligence to the SAN fabric, you need to make sure the implementation you choose is tuned for the highest performance and scalability, vendors say. Cisco, with its MDS 9000 Series switches (based on technology gained through its purchase last summer of start-up Andiamo Systems), and Brocade, with its SilkWorm Fabric Application Platform, promise the switches - scheduled to debut by year-end for as-yet-undetermined prices - are optimized to run intelligent applications through newly created ASICs and firmware.
Using a storage appliance (such as those from DataCore Software Corp., FalconStor Software and HP) to gain the same flexibility and network-based intelligence adds too great a performance hit, Buiocchi says. "It's basically adding software to a PC, but all of the PCI buses and internals of a PC are not optimized for storage applications, and you end up introducing latencies," he says.
Genevieve Sullivan, marketing manager within HP's storage software division, disagrees. "It's not an either/or situation when it comes to appliances or switches," she says. While HP offers its Continuous Access Storage Appliance for adding virtualization and other intelligent capabilities to the storage fabric, it is working to move the CASA functionality to Brocade's switch.
"There's a place for both," she says. "It's up to customers to decide what they need in terms of functionality and other factors."
For example, if three appliances cost less than one intelligent switch that can handle the functionality of those three appliances, maybe it makes sense to keep the appliances, especially if the administrative burden and the performance need is not too great. "But each organization is different," she says.
Aberdeen's Hill says the appliance/switch debate is pointless. "Either way will work, and you just have to gauge your environment," he says. "Even if there's a little inefficiency with an appliance, it may not matter because before you reach its physical limits, something else in the SAN may become the bottleneck. It may work fast enough for all you need to do and for the applications you have."
Look at partnerships
Users would be wiser to pay attention to how the intelligence is implemented in the fabric.
"The relationships with the applications and disk vendors are what would sell it to us," says Michelle Butler, technical program manager in charge of storage at the National Center for Supercomputing Applications on the University of Illinois campus in Champaign-Urbana. Butler currently oversees three SAN fabrics, the largest of which uses four Brocade SilkWorm 12000 Core Fabric Switches, 937 Itanium 2-based servers and a multi-vendor hodgepodge of disks and arrays, all supporting more than 230 terabytes of storage.
"Moving some intelligence into the switch fabric and being able to host out some of this data without a smart server on the end would be fantastic," Butler says. "You wouldn't have to spend so much money on hardware and setup and resources. But I'd like to see what it actually works with. If it doesn't support my current vendors, it's not really worth the move."
Cisco and Brocade each have announced partnerships with EMC and Veritas Software, while Brocade also is working with HP's CASA group. And both vendors say they also intend to increase the number of partners.
"But EMC and Veritas are the most expensive software and the most expensive hardware out there," Butler says. "If it could work with what I have - Serial ATA, LSI disks and Data Direct disks - that would be the deciding factor."
TerraServer's Barclay says functionality and investment need to be played off one another. "You have to fight the urge to get carried away with the technology and make sure that this is really something that makes sense in your environment," he says. "Especially if everything is working today, I'm leery of moving more intelligence into the network because there are a lot of places to blow it, and it's not like you can go back easily. Once it's there, you're stuck. And if it doesn't work, you're touching it every day."
Because the fabric is so critical to his storage network, Barclay says he'd be sure that whatever he did in the switch could fail over easily to a more traditional server-based solution. "So I'd have it in both places, but then, what's the point? You're not really saving anything," he says.
Butler says she's taking a wait-and-see stance. "Right now, we're looking at this as a nice-to-have, but something that's not that critical," she says. "But in a couple of years, as our fabrics grow and change, it will become a must-have. But by then, it should be clearer if and how it will fit with us."
Cummings is a freelance writer in North Andover, Massachusetts. She can be reached at jocum email@example.com.