Taming the storage storm

  • Louis Chua (Unknown Publication)
  • 03 September, 2003 22:00

We hear it every day from analysts and systems administrators everywhere: storage needs are exploding. End users, too, find themselves handling more and more data and email, with their systems administrators heckling and cajoling them to delete as much as possible.

However, according to IDC, the disk storage systems revenue in Asia Pacific excluding Japan (APEJ) was $US490.5 million in the first quarter of 2003. This was 11.2% below the revenue of $552.3 million in the corresponding period of 2002, and 20.8% below the revenue of $619.1 million in Q4 2002.

"To some extent, the lower value of disk storage shipments also reflects higher capacity utilisation with networked storage than with direct attached storage architectures. Customers continue to show increased interest in networked storage due to the scalability of storage area networks and network attached storage," said Graham Penn, director, Storage Research, IDC Asia-Pacific. "Networking of an organization's storage resource is proving to be an effective requirement for organisations seeking to better manage the ever increasing amounts of data that are required for today's business operations."

The truth is that while storage needs are increasing, the cost per volume of storage is decreasing rapidly and users are also squeezing as much capacity utilisation as possible out of the same boxes.

According to Gilbert Low, country manager of McData's South Asia region, the company's end-user research shows three pressing storage needs. Firstly, end users want to know how they can efficiently deploy new applications and the storage behind them. Secondly, they want to know how they can scale storage for current applications. Thirdly, they want to know how they can increase security in their storage networks.

For the first two needs, efficiently deploying and scaling storage, the ability to interoperate between storage products from different vendors is one of the first steps, because most end users run a heterogeneous network with storage from various vendors.

EMC research shows that ensuring interoperable networked storage environments accounts for up to 15% of today's IT expenditures, which translates to nearly $US100 billion in IT spending, said Ravi Rajendran, country manager, EMC Singapore.

According to McData's Low, end users will save between 40% and 60% on their storage investment by moving from DAS (direct attached storage) to a SAN (storage area network). "Interoperability has been the concern in SANs," said Low.

"But now McData's customers have few interoperability problems with their McData SANs."

As for NAS (network attached storage), Low has this to say: "Behind every good NAS there is a SAN. For example, if you purchase a Network Appliance filer or an EMC Celerra, you will get a NAS head on top of a SAN that may be powered by McData."

While firms like McData maintain that they are able to interoperate with all other storage vendors, some analysts warn against having storage products from too many vendors. According to a Gartner research note, "Magic Quadrant for SAN Fibre Channel Switches, 1H03", enterprises should recognise that cross-vendor switch-to-switch interoperability will be problematic: although interoperability between switches works at a basic level, more sophisticated use of switch features results in proprietary lock-in.

"In the ideal world, everyone will work together with one another," said Tony Lim, consulting director, BrightStor, CA Technology Services, Southeast Asia, Computer Associates International. "However, due to the lack of standardisation, vendors will have their own methods."

"For example, EMC and Hitachi will have their own way of setting up storage. This is where the difficulties come in," explained Lim. "If we wish to interoperate with 20 vendors, we will need to write 20 different times for the same type of connectivity."

In its research note, Gartner's advice to users of current products is that they should have a clear understanding of the upgrade paths to faster speeds and larger switches. Furthermore, it advises that when users expand their fabric switches, they should either stay with the same vendor or replace the entire fabric with products from another vendor. This will reduce the problems of interoperability between SAN fibre channel switches from different vendors. It ends by noting that Brocade and McData will continue to fight for leadership in the SAN market, with Cisco a possible challenger.

"The issue of interoperability is not a trivial matter," said EMC's Rajendran. "The scope of the interoperability problem extends beyond mere hardware connectivity. It encompasses switches, drivers, operating systems, and applications. While standards help create a foundation for interoperability, they typically cannot address the entire problem."

For end users, the interoperability problem is expressed in delayed implementation, unstable IT environments, and significant personnel time dedicated to testing and integration efforts, added Rajendran. The task of implementing a cost-effective, enterprise-class networked storage infrastructure has become an expensive, time-consuming problem for customers to solve on their own.

According to Rajendran, one of the most pressing issues for customers is the need to automate storage management so they can manage greater amounts of information with less labour. This is because, he believes, true interoperability assurance must encompass the entire scope of the storage infrastructure and beyond: operating systems and application software, the host bus adapters that reside within the application server, and the switches and routers that form the nucleus of the network.

"Addressing these issues from an end-user perspective proves tremendously inefficient," said Rajendran. "Because end users have no mechanism to share the results of their integration efforts, the same integration work winds up being replicated by their peers with every new installation."

With the cost to validate a single configuration easily reaching $750,000 by EMC's estimates, this constant duplication of time and effort becomes prohibitively expensive. Interoperability assurance must become a long-term, integral component of a vendor's development and support strategy. Only those vendors willing and able to make this significant investment will be able to deliver the real business value of networked storage to their customers.

For example, McData has an interoperability lab called the SIL, or System Integration Lab. "In this lab, we have hardware from just about every server vendor and storage vendor, as well as other switch vendors," explained Low. "From this, we create an interoperability matrix that shows what solutions have been tested and proven. McData has the largest SANs running today, and continually pushes the envelope so that we find problems and fix them before our customers do."

EMC does its interoperability assurance through its E-Lab Tested qualification process, a series of functionality and reliability tests to ensure the interoperability of all components that make up a multi-vendor information storage network. In EMC's E-Labs, engineers replicate customer environments using 2,600 terabytes of storage across nearly five acres of floor space.

To combat the problem of incompatibility, end users are turning to software vendors such as Veritas and CA to reduce the need to manage storage devices on a platform-by-platform basis. A software-based system can mask the complexity of platform-specific management by allowing businesses to use a single storage management solution regardless of storage type.

If managed properly, this strategy has the potential to increase return on investment, reduce total cost of ownership and drive up administrative productivity. It also allows businesses to decide freely on future storage platforms instead of being tied to a particular brand of storage.

This is virtualization: complexity is masked from the application. A business could have multiple storage arrays from different vendors, yet applications and users see them as one big logical storage box. Different RAID (redundant array of inexpensive disks) levels can be configured to allow for redundancy and for different application requirements.

Instead of having to grow only within a specific box, businesses can be allowed to grow out of the box. Data can be virtually on any hardware, and all will be managed by a single storage management platform.
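The pooling idea described above can be sketched in a few lines of Python. This is purely illustrative, assuming hypothetical arrays with fixed block counts; it does not reflect any vendor's actual product or API:

```python
# Illustrative sketch: present several vendor arrays as one logical volume.
# Array names and sizes are hypothetical, not real products.

class Array:
    def __init__(self, name, blocks):
        self.name = name
        self.data = [None] * blocks

class LogicalVolume:
    """Concatenates backing arrays into one flat block address space."""
    def __init__(self, arrays):
        self.arrays = arrays

    def _locate(self, block):
        # Walk the arrays until the logical block falls inside one of them.
        for arr in self.arrays:
            if block < len(arr.data):
                return arr, block
            block -= len(arr.data)
        raise IndexError("block beyond logical volume")

    def write(self, block, value):
        arr, offset = self._locate(block)
        arr.data[offset] = value

    def read(self, block):
        arr, offset = self._locate(block)
        return arr.data[offset]

# Two arrays from different vendors appear as one 300-block volume.
vol = LogicalVolume([Array("vendor_a", 100), Array("vendor_b", 200)])
vol.write(150, b"payload")  # logical block 150 lands on the second array
print(vol.read(150))
```

The application addresses one flat block space; the volume layer decides which physical box actually holds the data, which is the property that lets storage grow "out of the box".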

"Running a data centre with storage hardware from multiple vendors can be a challenge," said Alvin Ow, technical consulting manager, Asia South, Veritas Software. "You not only need to leverage existing technology investments, you also need to evolve a unified system that runs efficiently while minimising user errors. Making all the parts work together takes time and effort. The complication lies in trying to get multi-vendor resources to be shared and efficiently utilized."

"This is tricky business, made more so by the current lack of interoperability standards," added Ow. Managing multi-vendor storage is a key concern for both end users and storage vendors concerned with making all the parts work together. For IT managers, the lack of storage network interoperability standards means fewer options and more testing to make sure products from multiple vendors can work together.

For end users, it means balancing a suite of uncoordinated applications from multiple vendors. These applications frequently lack uniform functionality, security, and reliability. The interoperability challenge is being addressed by storage vendors in a number of ways, including reverse engineering, swapping APIs (application programming interfaces), and the development of industry standards.

By virtue of hardware independence, software solutions allow organisations to maximise current hardware resources while offering freedom of choice for future hardware purchases. A software approach can potentially provide highly interoperable storage management solutions that work seamlessly across the entire infrastructure, helping businesses gain control of complexity.

For example, CA relies on an extra layer between hardware and software, called instrumentation technology, to coordinate interoperability. It acts as a kind of middleware. If there is new hardware or software that CA needs to support, it only needs to make changes, such as understanding certain APIs, at the instrumentation layer.
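This instrumentation idea, one small adapter per vendor behind a uniform interface, can be sketched in Python. The vendor classes and method names below are invented for illustration and do not represent any real vendor's API:

```python
# Sketch of an instrumentation (adapter) layer. Vendor classes and their
# method names are hypothetical, invented purely for illustration.

class VendorA:
    def vendor_a_capacity_gb(self):
        return 500

class VendorB:
    def get_cap(self):
        return 1200  # this vendor also reports GB, via a different call

class Instrumented:
    """Uniform interface the management software codes against."""
    def __init__(self, device, capacity_fn):
        self.device = device
        self._capacity_fn = capacity_fn

    def capacity_gb(self):
        return self._capacity_fn(self.device)

# One small adapter per vendor, instead of rewriting the whole manager
# once per vendor for the same type of connectivity.
fleet = [
    Instrumented(VendorA(), lambda d: d.vendor_a_capacity_gb()),
    Instrumented(VendorB(), lambda d: d.get_cap()),
]

total = sum(dev.capacity_gb() for dev in fleet)
print(total)  # 1700
```

Supporting a 21st vendor means writing one more adapter, while everything above the instrumentation layer stays unchanged.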

Vendors like EMC and McData have been strong proponents of open standards and industry collaboration over the years. The standard in question is the host-based volume management model for the Storage Management Interface Specification (SMI-S), the industry standards effort formerly known as Bluefin. Most storage vendors are reworking their systems to provide SMI-S-enabled products.

Veritas Software, for one, is helping to define a volume manager for SMI-S version 1.1, based on technology from its volume management and virtualization solution, Veritas Volume Manager. Under the Storage Networking Industry Association (SNIA), members created a specification that defines the software management structure for host and operating system block-level virtualization.

"The SMI-S standard is quite a recent development," said Lim from CA.

"However, when such standards are arrived at, they usually apply to the lowest common denominator," he warned. "All vendors, however, will want to differentiate themselves by offering something on top of the lowest common denominator, something which their competitors do not have."

However, until standards are finalised, most users and vendors have found that the best way to provide end users with maximum functionality is to incorporate it via APIs, working with different platform partners. In short, these agreements give end users the benefits they need today while industry standards continue to mature.

Almost all of the storage vendors have put in place a number of API (application programming interface) exchange agreements with other hardware and software vendors. API-sharing simplifies the management of other vendors' hardware environments.

One of the more recent examples of such a partnership came when Veritas Software and Network Appliance announced that they were tightening their relationship to better integrate the two companies' product development, sales and marketing teams.

The partnership is expected to yield products that better integrate Veritas' software with Network Appliance's hardware in three specific areas: disk-based data protection, compliance and regulatory archiving, and storage management, according to Veritas' vice president of Business Development, Robert Soderbery.

Beginning immediately, Veritas and Network Appliance will also start training their sales and support teams to handle each other's products, the companies said.

"We have signed a new agreement to extend our partnership to focus on collaboration of joint sales, marketing, and support for our solutions," said Soderbery.

The expanded relationship is the latest step in a partnership between the two companies that began in October 2002, Soderbery said. "We have been working for a long time on integration," he said.

Network Appliance currently sells a variety of storage devices that are integrated with Veritas products, including its NetBackup and StorageCentral software, but the two companies are now expected to develop more tightly integrated products that include enhancements like special management features for Network Appliance platforms, according to The Yankee Group analyst Jamie Gruener.

In the United States, where most IT vendors are based, legislation is also forcing storage vendors to work together.

In the wake of new regulations requiring better corporate record-keeping, three top tape library vendors have confirmed that they are working to combine inexpensive disk arrays with their libraries to bolster backup reliability and data restoration.

Advanced Digital Information (ADIC) and Spectra Logic are each developing products that would use serial ATA disk arrays physically and logically tied to tape libraries to consolidate storage management, speed backups, increase redundancy and guarantee the fast restoration of mission-critical data.

ADIC said its disk/tape library combination will be available within the year. Spectra Logic said its model will be available early in 2004.

Jonathan Otis, ADIC's senior vice president of Technology, said he sees RAID as adding reliability to his company's libraries because "you can lose a disk drive and the backup will continue, while with tape drives, if a drive goes down it will stop the process and you'll have to start it all over again on another drive."

Representatives at StorageTek would not say when its product will be available, but they did say the technology is part of an overall information life-cycle management initiative focused on storing data on varying forms of media.

The goal is to align cost, reliability and speed of recovery with the importance of the data.

"The next logical step for our partners and customers is doing tighter integration of components with not just disk to tape, but with networking and management tools," said Tom Balue, manager of Product Marketing for StorageTek's Automated Tape Solutions division.

Balue said one of the biggest advantages of a disk/tape library combination is that systems administrators can have a single console that allows them to back up different data sets to disk and tape without having to learn multiple backup applications.

"What is the advantage of disk over tape? If you lose a tape, you're in trouble, but if you are using inexpensive disks in a RAID, the data is not lost," said Matt Starr, chief technology officer at Spectra Logic.

Starr was referring to an array's ability to rebuild data striped across multiple disks after a single drive fails.
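Starr's point rests on parity. In XOR-parity RAID schemes (such as RAID 4 or 5), the parity block is the XOR of the data blocks, so any single lost block can be recomputed from the survivors. A minimal illustration in Python, with made-up two-byte blocks standing in for disk stripes:

```python
from functools import reduce

# Three data blocks striped across three disks (values are arbitrary),
# plus one parity block on a fourth disk.
d0, d1, d2 = b"\x10\x20", b"\x03\x04", b"\xff\x00"

def xor_blocks(a, b):
    # XOR two equal-length blocks byte by byte.
    return bytes(x ^ y for x, y in zip(a, b))

parity = reduce(xor_blocks, (d0, d1, d2))

# Suppose the disk holding d1 fails: XOR the survivors with the parity.
rebuilt = reduce(xor_blocks, (d0, d2, parity))
assert rebuilt == d1  # the lost block is fully recovered
```

Because XOR is its own inverse, d0 ^ d2 ^ (d0 ^ d1 ^ d2) collapses to d1, which is why a backup can continue through a single drive failure.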

Another advantage to combining disk arrays with tape libraries is that administrators could combine power sources and cooling systems, Starr said.

Rick Luttrall, director of Product Marketing for the Nearline Storage division of leading tape vendor Hewlett-Packard, said HP is considering physically combining disk and tape. But he emphasized that addressing a policy-driven information life-cycle management strategy that includes intelligent software is far more important.