Data storage continually demands more capacity and greater bandwidth and, partly as a consequence, is becoming increasingly complex to manage and protect.
According to a survey this year by Computerworld US stablemate InfoWorld, a healthy storage strategy should give attention to all of these areas. Focusing on just one, such as adding capacity without providing adequate data recovery or network bandwidth to support it, would create an imbalance and eventually diminish the return on the investment.
Tony Prigmore, a senior analyst at Enterprise Storage Group in Massachusetts, says five key elements will become part of any comprehensive enterprise storage network: storage resource management, storage network management, policy management, data management and virtualisation.
IT managers will continue to focus on:
- reducing database restore and backup time for business-critical applications
- opening up the bandwidth throttle
- developing better disaster recovery solutions (local and remote mirroring, local and remote snapshots) and
- improving the manageability and flexibility of their storage systems.
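The local and remote snapshots mentioned above typically rely on copy-on-write techniques: a snapshot starts empty, and a block's original contents are preserved only when that block is first overwritten. As a rough, vendor-neutral sketch (the class and method names here are invented for illustration; real snapshot logic lives in array firmware or a volume manager, not application code), the idea looks like this in Python:

```python
class Volume:
    """A toy block volume supporting copy-on-write snapshots."""

    def __init__(self, nblocks, block_size=512):
        self.blocks = [b"\x00" * block_size for _ in range(nblocks)]
        self.snapshots = []   # each snapshot: {block_index: original_data}

    def snapshot(self):
        # A new snapshot starts empty; blocks are copied into it only
        # when they are first overwritten ("copy on write").
        snap = {}
        self.snapshots.append(snap)
        return snap

    def write(self, index, data):
        for snap in self.snapshots:
            if index not in snap:            # preserve pre-write contents once
                snap[index] = self.blocks[index]
        self.blocks[index] = data

    def read_snapshot(self, snap, index):
        # Snapshot view: original data if the block has changed, else current.
        return snap.get(index, self.blocks[index])
```

Because unchanged blocks are shared between the live volume and the snapshot, the point-in-time copy costs almost nothing until writes start landing, which is what makes frequent local snapshots practical as a recovery tool.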
The move away from direct-attached storage continues. However, there will always be a portion of servers within the total networked storage market that use internal storage and do not consume a networked storage service. Phil Goodwin, programme director with IT research company Meta Group’s server infrastructure strategies service, says a storage infrastructure is necessary to support an integrated application portfolio.
Multiple types of storage services, and consequently different storage architectures, are required for different applications such as email, ERP and CRM within the enterprise IT portfolio.
Once it was thought that SAN (storage area networks) and NAS (network-attached storage) competed, but now the overarching trend is to connect storage through a mix of SAN and NAS. SAN storage is defined as servers accessing disk storage resources via channel-based networks that principally use the fibre channel protocol. NAS uses dedicated storage devices that sit on the corporate LAN and take the storage function off the application servers.
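The practical difference between the two can be sketched in Python. In a NAS model the filer owns the file system and clients deal in named files; in a SAN model the host owns the file system and the array exports raw numbered blocks on a LUN. The scratch file below stands in for both, purely for illustration, and the function names are invented:

```python
import os
import tempfile

# A scratch file standing in for a LUN (SAN view) or an exported file (NAS view).
fd, path = tempfile.mkstemp()
os.write(fd, b"A" * 512 + b"B" * 512)
os.close(fd)

def nas_style_read(path):
    # NAS: the storage device owns the file system; clients speak
    # file semantics (NFS/CIFS), so the unit of access is a named file.
    with open(path, "rb") as f:
        return f.read()

def san_style_read(path, block, block_size=512):
    # SAN: the host owns the file system; the array exports raw capacity,
    # so the unit of access is a numbered block on a LUN.
    with open(path, "rb") as dev:
        dev.seek(block * block_size)
        return dev.read(block_size)
```

The mix-and-match trend mentioned above follows directly: file-oriented workloads suit the NAS path, while databases and other block-hungry applications suit the SAN path, and most enterprises run both.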
Meta Group advises organisations to consolidate their storage vendors to one or two platforms in order to simplify operations, reduce training and improve organisational agility. According to its analysts, fibre channel will continue to be the dominant SAN architecture in data centre deployments through 2006/2007.
We started hearing about virtualisation back in 2000. In its purest form, virtualisation allows users to add storage capacity using inexpensive, commodity disk and tape drives and dynamically manage those storage resources as virtual storage pools with little regard for where the actual resources are.
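A minimal sketch of that pooling idea, with invented names and no real device handling, might look like this in Python: capacity from several commodity devices joins a single pool of anonymous extents, and virtual volumes are carved out with no regard for which physical device the extents live on:

```python
class StoragePool:
    """Toy virtualisation layer: carve virtual volumes out of a pool of
    physical extents, hiding which device each extent lives on.
    Illustrative only; real virtualisation runs in arrays, hosts or
    network appliances."""

    def __init__(self):
        self.free = []   # list of (device_name, extent_index)

    def add_device(self, name, extents):
        # Commodity disk or tape capacity joins the pool as anonymous extents.
        self.free.extend((name, i) for i in range(extents))

    def create_volume(self, extents_needed):
        if extents_needed > len(self.free):
            raise ValueError("pool exhausted")
        # The caller gets capacity; physical placement is the pool's business.
        return [self.free.pop() for _ in range(extents_needed)]
```

The appeal for administrators is visible even in the toy version: a volume request can be satisfied from whatever devices have free extents, so adding cheap capacity is just another `add_device` call rather than a per-application provisioning exercise.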
For years vendors such as EMC, Network Appliance and others have offered storage virtualisation at the disk array or hardware level, whereas software companies such as Veritas have offered virtualisation at the host level. More recently, a number of vendors such as StorageApps, DataCore Software, Xiotech, FalconStor Software and StoreAge Networking Technologies have arrived offering storage virtualisation at the network level, many in the form of storage appliances. But opinions on the right way to virtualise at the network level have differed significantly among the newer players, which has frustrated end-users attracted to virtualisation for its simplicity.
Meta Group analyst Kevin McIsaac says virtualisation will be important but the hype is well ahead of the reality. It will be another 24 months before we see practical benefits from virtualisation. Meta suggests organisations focus on the benefits of the storage applications that will be enabled by virtualisation engines, rather than the virtualisation engines themselves. McIsaac says it's difficult to pick the winners in this space, but Meta doesn't see the early virtualisation start-ups surviving in this market; instead, the winners will be larger storage companies such as HP, EMC, Veritas and IBM.
Ethernet is still by far the most popular networking protocol, but the 2003 InfoWorld Storage Survey of 475 readers in the US suggests a future mass migration to gigabit ethernet: 59% of respondents, three times the proportion that currently run gigabit ethernet, have plans to deploy it this year.
Readers showed a similar enthusiasm for fibre channel networks: 49% said they would roll out such a network in the next 12 months, against 18% who currently have one. Some 28% planned to deploy, or were considering, management software to simplify administration.
IBM’s fibre connection, or ficon, mainframe connectivity technology will continue to be the dominant interconnect in the data centre. This year ficon is being upgraded from 1Gbit/s to 2Gbit/s.
In February, the IETF ratified iSCSI (internet small computer systems interface) as a standard. iSCSI can be used to transmit data over LANs, WANs and the internet, allowing for storage across distances, which is especially useful for remote backup and disaster recovery. Eventually the protocol will allow network administrators to take hundreds or even thousands of small servers that have been locked into direct-attached storage and plug them into ethernet storage networks for backup and management.
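The core idea behind iSCSI, wrapping a SCSI command descriptor block (CDB) in a header and carrying it over ordinary TCP/IP, can be sketched as follows. The header layout below is invented for illustration only; the real wire format is defined by the IETF standard (RFC 3720) and is considerably richer, starting with a 48-byte basic header segment:

```python
import struct

def frame_scsi_command(lun, cdb):
    # Toy framing: one byte for an opcode, one for the logical unit
    # number, two for the CDB length, then the CDB itself. NOT the
    # real iSCSI PDU layout; for illustration only.
    header = struct.pack("!BBH", 0x01, lun, len(cdb))
    return header + cdb

def unframe(pdu):
    # The receiving target peels the header off and recovers the
    # original SCSI command to execute against its disks.
    opcode, lun, length = struct.unpack("!BBH", pdu[:4])
    return lun, pdu[4:4 + length]
```

Because the payload is the same SCSI command a direct-attached disk would see, the server's storage stack is unchanged; only the transport underneath moves from a dedicated channel to the ethernet network, which is where the cost and skills advantages discussed below come from.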
Specialists put the cost-benefit ratio of iSCSI over fibre channel storage area networks at between 5:1 and 10:1, thanks to standardised technology and the fact that the same skills used to manage common data networks can be used to manage the storage network.
According to some, iSCSI will take some time to mature into an enterprise-class storage technology but it will eventually find its way into large data centres.
IDC storage analyst Graham Penn says iSCSI won’t be taken up immediately but will be common in a few years. McIsaac says it will not emerge for at least another 18 months and when it does it will be a low-cost, departmental server interconnection that will complement ficon.
Penn expects iSCSI to start at the periphery and work its way into the core infrastructure. Analysts expect IBM, EMC, HP, Hitachi and Dell to start shipping iSCSI arrays and host bus adapters over the next several months.
According to Meta Group the enterprise SAN market is highly competitive and will undergo significant changes during the next 24 months. Hardware will continue to feature faster disk drives, more capacity and expanded data-copying and disaster recovery features.
At least a couple of years away is object-based storage. That’s a storage device that uses its own horsepower to manage data, requires no manual settings for security and doesn’t care if the server speaks in blocks or files.
Most innovation will occur in management software, whereas hardware is maturing. Vendors will attempt to deliver policy-based storage management that will be tightly integrated and initially homogeneous. Meta Group doesn’t expect robust heterogeneous management capabilities before 2005/2006.
Interoperability — Bluefin
Bluefin, a draft specification from the Storage Networking Industry Association (SNIA), is expected to be finalised by the third quarter and is aimed at making it easier to manage multi-vendor storage area networks.
IBM is expected to unveil a Bluefin-compliant storage management interface for its Enterprise Storage Server Model 800, known informally as Shark.
Object-based storage
Still several years away, object-based storage is an initiative to put some of the intelligence for accessing objects such as storage blocks into the storage array rather than the application server file system, meaning servers would no longer consume bandwidth searching for and accessing blocks of storage.
In one proposed model, intelligence would be added to the storage device in order to offload low-level storage management tasks.
The SNIA is working on a specification called object-based storage devices (OSD), which would turn files, directories and storage-related elements into objects that storage management software accesses using an extended SCSI-3 command set. However, the aim is not to limit OSD to SCSI but to have it running over fibre channel, TCP/IP or other transports.
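The contrast with block addressing can be sketched as follows. The class and method names here are invented for illustration, and the real OSD interface is a SCSI command-set extension rather than a Python API; the point is that the device itself manages space allocation and per-object attributes, so clients address named objects instead of raw block numbers:

```python
class ObjectStorageDevice:
    """Toy OSD sketch: the device manages its own space and security
    attributes; clients address named objects, not raw blocks."""

    def __init__(self):
        self.objects = {}   # object_id -> bytearray of contents
        self.attrs = {}     # object_id -> attribute dict (e.g. access policy)

    def create(self, object_id, attrs=None):
        self.objects[object_id] = bytearray()
        self.attrs[object_id] = attrs or {}

    def write(self, object_id, offset, data):
        buf = self.objects[object_id]
        if len(buf) < offset + len(data):
            # The device, not the host file system, allocates space.
            buf.extend(b"\x00" * (offset + len(data) - len(buf)))
        buf[offset:offset + len(data)] = data

    def read(self, object_id, offset, length):
        return bytes(self.objects[object_id][offset:offset + length])
```

Because allocation and security travel with each object inside the device, a server no longer needs to know the block layout at all, which is precisely why no manual security settings are needed and why it doesn't matter whether the server speaks in blocks or files.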