Finance houses get clever with storage

The amount of data being generated by enterprises is estimated typically to double every two or three years. IT departments are adding terabytes of capacity to their enterprises in response but throwing tape and disk at the problem is no longer enough.

IT managers are having to become more strategic about managing their storage costs, particularly in the face of a slowing global economy.

Savvy IT departments are looking to consolidate and automate storage, and it’s no longer the hardware but the management software, the networking and the I/O connections that are driving storage technology today.

Banks are good examples of organisations that usually understand storage and are up to speed with the latest developments. Most still have IBM System 390s or other mainframes running their transaction processing; from this environment they have an understanding of how SANs (storage area networks) work, because IBM’s mainframe storage technology ESCON (enterprise system connection) was to all intents and purposes a SAN without the fibre channel network link. (IBM is well aware of this: it is bringing out a fibre channel version of ESCON, called FICON, or fibre connection, to integrate mainframes with SANs.)

A SAN is a high-speed sub-network of shared storage devices using either gigabit ethernet or fibre channel. A SAN’s architecture makes all storage devices available to all servers on a LAN or WAN.

Because stored data doesn’t reside directly on a single server, as it does with direct-attached RAID or tape backup, server power is freed for business applications and network capacity is released to end users.

As banks extend their range of applications beyond transaction processing to include data warehousing and CRM (customer relationship management) systems, they are considering technologies such as SANs and network-attached storage (NAS), which uses a network device dedicated to storage, to maximise their use of disk space, simplify administration and reduce costs.
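The practical difference is in how servers see the storage: NAS presents files over the network, while a SAN presents raw blocks that the server formats and manages itself. The short Python sketch below illustrates that distinction; the mount point and device path are placeholders invented for the example, not references to any particular installation.

```python
# Illustrative sketch only: the paths below are assumptions, not real systems.
import os

# NAS: the storage appliance exports a filesystem (e.g. over NFS/CIFS),
# so applications read and write ordinary files on a network mount.
NAS_MOUNT = "/mnt/nas_share"          # hypothetical NFS mount point

def read_report_from_nas(name: str) -> bytes:
    """File-level access: the NAS device handles the filesystem itself."""
    with open(os.path.join(NAS_MOUNT, name), "rb") as f:
        return f.read()

# SAN: the network presents a raw block device to the server; the server
# puts its own filesystem (or database) on top of those blocks.
SAN_DEVICE = "/dev/sdb"               # hypothetical SAN-attached volume
BLOCK_SIZE = 512

def read_block_from_san(block_number: int) -> bytes:
    """Block-level access: the server addresses raw blocks, not files."""
    with open(SAN_DEVICE, "rb") as dev:
        dev.seek(block_number * BLOCK_SIZE)
        return dev.read(BLOCK_SIZE)
```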

Riding the tiger

First Union, the United States’ sixth-largest bank, saw its storage needs skyrocket because of increased internal use of data warehousing tools and a move into e-commerce that added nearly 400 servers.

First Union now stores 120TB of data in its data centres in Charlotte, North Carolina, and Jacksonville, Florida, and a planned custom-built cheque imaging system will add to the total later this year.

“It’s been at least a [200%] growth, from a storage perspective, every year,” says First Union IT executive Gary Fox.

First Union considered using SANs that offload storage systems to dedicated high-speed networks controlled by fibre channel switches. Instead it went with network-attached storage, putting cross-platform storage devices directly on the company’s production network.

Fox maintains a large system: approximately 1000 NT and 1000 Unix servers attached to 67 storage systems supplied by EMC. The ControlCenter software that comes with the EMC hardware is Fox’s primary management tool, and he says the amount of time and effort required to support the Unix server storage has decreased. But management across all enterprise storage systems is still lacking.

“We will use this approach until we find a tool that will globally manage it,” says Fox, who is reviewing products from Massachusetts-based HighGround Systems, Veritas and Tivoli as alternatives.

People’s Bank in Connecticut migrated its storage to EMC drive arrays and switches and Tivoli’s TSM software, retiring legacy storage systems from California-based Storage Dimensions and Sun Microsystems, as well as mainframe storage from IBM and Unisys.

“Primarily, we did it for the flexibility — the ability to expand capacity quickly,” says Lena Zoghbi, the bank’s vice-president of enterprise systems management.

Enterprise server system executive Raju Palnitkar says he saw the move to one homogeneous system as the best choice to get a handle on storage management. “We did not have to have IBM tools and Unisys tools and Sun tools to manage each of the different storage types,” he says.

Most of the bank’s approximately 500 servers run NT and Solaris, with a few IBM AIX and OS/2 servers and AT&T Unix-based servers. Each night, Zoghbi backs up 600GB of data on the servers and a mainframe to EMC Symmetrix 3430, 3830 and 5500 RAID arrays.

Six EMC Connectrix switches provide the interoperability among the bank’s servers, says Palnitkar. In addition to the TSM software’s backup and monitoring features, EMC’s PowerPath load-balancing/fail-over software and Symmetrix Manager monitoring software serve as the bank’s primary management tools. Several people spend the equivalent of two full-time jobs managing storage.

“I think the combination of these [storage management tools] is sufficient for what we need,” Palnitkar says. “But the EMC tools don’t give us all of the flexibility we need. We still need to get EMC involved in some of the changes.”

Faster, wider

ASB Bank, which uses Hitachi Data Systems storage servers, is considering SAN technology to meet storage needs and provide access to data across many channels including online banking.

Like the vast majority of technology executives, ASB Bank IT head Clayton Wakefield doesn’t get terribly worked up over storage hardware, seeing it as something which is part of the yearly upgrade cycle. “Disk is the same as CPUs, servers and the like — they’re just a commodity.” In addition, he says, online banking is just another channel, and while it’s responsible for some growth in the bank’s storage needs it’s not driving huge capacity demands.

However, internet-based services are causing the bank to think about how to provide access to data across many channels and the storage issues associated with this.

Wakefield says ASB Bank is considering SAN technology, appropriate access speeds and how to send storage data over wide area networks, particularly between data centres.

Graham Penn, storage specialist at IT market researcher IDC in Australia, says storage using internet protocol (IP) will provide the ability to have remote wide area connectivity for large chunks of data, which will improve high availability and business continuity and allow organisations to further centralise their operations.

This means companies are talking not just to the traditional storage vendors but to networking companies such as Cisco, he says. “The data centre fibre channel will be around forever and a day, but in the distributed environment [highly applicable in New Zealand] storage over IP will be a significant option for some people.”

Draft specifications for storage over IP, which are before the Internet Engineering Task Force, are surrounded by a series of other specifications for management, encapsulation of data, remote booting and variations for configuring IP storage networks.

Standards are expected in the first quarter of next year but pre-standard products using iSCSI and fibre channel-over-IP have started to appear. IBM has introduced an IP storage array it calls the TotalStorage IP Storage 200i, which uses iSCSI; and Cisco is shipping fibre channel-over-IP or iSCSI storage switches and routers. EMC, Cisco, Nortel and Lucent have introduced fibre channel-over-IP products that tunnel fibre channel in IP for transport over dense wave division multiplexing (DWDM).

These vendors say the software for their iSCSI products will be upgradeable when the standard arrives, thus preserving customers’ investments. Fibre channel-over-IP vendors admit that some equipment changes may be necessary because of the complexity of tunnelling fibre channel in IP.
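The idea behind iSCSI is simply to carry SCSI-style block commands inside ordinary TCP/IP traffic instead of over a dedicated fibre channel link. The toy Python sketch below shows that encapsulation pattern in the simplest possible terms; the header layout and opcode are invented for illustration and are not the actual iSCSI PDU format defined in the IETF drafts.

```python
# Toy illustration of carrying block-storage commands over IP.
# The header layout and opcode are invented for this sketch; they are NOT
# the real iSCSI PDU format being standardised by the IETF.
import socket
import struct

ISCSI_PORT = 3260        # registered iSCSI port; everything else here is made up
OP_READ = 0x01           # hypothetical opcode for a read request

def build_read_request(lun: int, lba: int, blocks: int) -> bytes:
    """Pack LUN, logical block address and block count into a fixed header."""
    return struct.pack("!BBQI", OP_READ, lun, lba, blocks)

def read_blocks(target_host: str, lun: int, lba: int, blocks: int) -> bytes:
    """Send the command over an ordinary TCP connection and collect the data."""
    with socket.create_connection((target_host, ISCSI_PORT)) as sock:
        sock.sendall(build_read_request(lun, lba, blocks))
        chunks, expected = [], blocks * 512
        while expected > 0:
            chunk = sock.recv(min(expected, 4096))
            if not chunk:
                break
            chunks.append(chunk)
            expected -= len(chunk)
        return b"".join(chunks)
```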

“By Easter all the major players will have storage over IP options available in their product portfolio,” says Penn. “Providing you’re working with a vendor who has a reasonable roadmap you shouldn’t encounter too many difficulties. Don’t mix and match at this point. Work with a major supplier who guarantees the capability and by the time you get to do the final implementations you’ll be okay. But don’t try and do it yourself because it is exceedingly complex.”

Unlike many New Zealand banks, ASB Bank doesn’t outsource its IT operations.

The Holy Grail

Hitachi gear is used by WestpacTrust and National Bank, as well as the ASB Bank. Hitachi Data Systems country manager Roger Cockayne says the banking industry was one of the first to pick up SANs.

“In the past the storage strategy was based around the computer itself but today CPUs are a commodity,” says Cockayne. “The storage is becoming the centre of the universe and the servers are starting to gather around it. We’re starting to see server consolidation again and the first thing to do is to get the data into one place.”

Cockayne says imaging (such as cheque imaging) and the digitisation of information (for example, security video tapes) are driving storage growth and banks are also collecting a lot more information regarding customer demographics. Although storage hardware is becoming faster and cheaper to buy, management is becoming the big issue, he says.

By consolidating storage, SANs should make management easier, and developments in virtualisation also promise to ease the management burden.

Storage virtualisation offers a means of addressing storage functionally rather than physically. It abstracts the physical process of storing data through software (and sometimes hardware) layers that map data from the logical storage space required by applications to the actual physical storage space.

Virtualisation means the computing system sees all storage disks, regardless of location, as one resource rather than scattered throughout the organisation.

“Start to introduce virtual storage as much as you can so you begin to see one big system,” says Penn. “It spreads data across disk as though it is one big disk. You manage your storage as one resource across an IT infrastructure.

“You can do it in software, which is Veritas’ approach; you can do it at the storage device, which is the EMC approach; or at the network, on a separate appliance attached to the network, such as Compaq’s VersaStor, which IBM has licensed. All the companies are trying to get their solution accepted as the generic industry standard.”
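The mapping layer Penn describes can be pictured with a small sketch: one logical block address space spread across several physical disks, so applications see a single large volume. The Python below is purely illustrative (the “disks” are in-memory buffers and the striping scheme is an assumption), not any vendor’s implementation.

```python
# Minimal sketch of storage virtualisation: one logical address space
# mapped onto several physical disks. The "disks" here are in-memory
# byte arrays purely for illustration.
BLOCK_SIZE = 512

class VirtualVolume:
    def __init__(self, physical_disks: list[bytearray]):
        self.disks = physical_disks

    def _locate(self, logical_block: int) -> tuple[int, int]:
        """Map a logical block number to (disk index, block offset on that disk)."""
        # Simple striping: consecutive logical blocks rotate across disks.
        return logical_block % len(self.disks), logical_block // len(self.disks)

    def read(self, logical_block: int) -> bytes:
        disk, offset = self._locate(logical_block)
        start = offset * BLOCK_SIZE
        return bytes(self.disks[disk][start:start + BLOCK_SIZE])

    def write(self, logical_block: int, data: bytes) -> None:
        disk, offset = self._locate(logical_block)
        start = offset * BLOCK_SIZE
        self.disks[disk][start:start + BLOCK_SIZE] = data.ljust(BLOCK_SIZE, b"\0")

# The application sees one big volume; the mapping layer decides where blocks land.
volume = VirtualVolume([bytearray(1024 * BLOCK_SIZE) for _ in range(4)])
volume.write(7, b"customer record")
assert volume.read(7).startswith(b"customer record")
```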

Penn says being able to manage devices from more than one vendor is the Holy Grail, but it’s almost impossible at the moment.

For banks the ability to move stored information safely and swiftly between data centres is also vital and the technology to link storage facilities is only starting to solidify.

Two types of products are starting to appear, say analysts: those based on storage-over-IP protocols, known as iSCSI because SCSI commands travel across the IP network rather than over a dedicated link between the server and the storage device, which let storage data run over existing ethernet networks; and those based on the fibre channel-over-IP protocol, which bridges geographically separated SANs.
