IBM throws a supercomputing fastball

Big Blue says it has reached a new speed and volume milestone. Shelley Solheim reports

IBM has announced that it has developed technology to speed up the way large computer networks access and share information.

Under a project code-named Fastball, IBM’s ASC Purple supercomputer has been able to achieve 102GB/s of sustained read-and-write performance to a single file — the equivalent of downloading 25,000 songs in a second over the internet, according to IBM.
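IBM's comparison can be sanity-checked with quick arithmetic, assuming a typical song is roughly 4MB (the song size is an assumption, not a figure from IBM):

```python
# Sanity check of IBM's songs-per-second comparison.
# Assumption: an average song is about 4 MB.
song_mb = 4
songs_per_second = 25_000
total_gb = song_mb * songs_per_second / 1024  # MB -> GB
print(round(total_gb, 1))  # roughly 100 GB moved per second
```

At about 98GB per second, the estimate lands close to the quoted sustained rate.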

IBM’s General Parallel File System (GPFS) software was used to manage the transfer of data between thousands of processors and disk storage devices. IBM says it had to enhance the software in several areas to handle such fast data rates.

For example, it employed new fencing techniques to prevent individual hardware failures from causing the overall system to fail, and added new capabilities to orchestrate flow control between all of the different hardware components in the system. “If they all go really fast at the same time you get a traffic jam and performance goes down,” says Chris Maher, director of high-performance computing development for IBM’s Systems and Technology Group.
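The traffic-jam problem Maher describes is a classic flow-control issue. A minimal sketch of the idea (purely illustrative — not GPFS's actual mechanism) is to cap the number of I/O requests in flight at once, so that many fast clients cannot swamp the storage controllers:

```python
import threading

# Illustrative flow control: a bounded semaphore caps concurrent
# I/O requests so fast clients don't overwhelm storage controllers.
MAX_IN_FLIGHT = 4
inflight = threading.BoundedSemaphore(MAX_IN_FLIGHT)

results = []

def issue_io(request_id):
    with inflight:  # blocks until one of the 4 slots frees up
        # stand-in for an actual disk read or write
        results.append(request_id)

threads = [threading.Thread(target=issue_io, args=(i,)) for i in range(16)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # all 16 requests complete, at most 4 at a time
```

Every request eventually completes, but never more than four run concurrently — the software paces the hardware rather than letting everything "go really fast at the same time".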

ASC Purple, the world’s third most powerful supercomputer according to the Top500 list, is housed at the Lawrence Livermore National Laboratory (LLNL). The Fastball capabilities were demonstrated at LLNL; IBM supplied the computer to the US Department of Energy and LLNL for use in nuclear weapons research.

The Fastball project combined IBM servers, a high-performance computing switch network and storage subsystems tied together through the enhanced version of the GPFS software. IBM used 416 individual storage controllers combined with 104 Power-based eServer p575 nodes.

In the Fastball demonstration, 1,000 clients requested a single file at the same time. Through virtualisation techniques, the software then spread that file across hundreds of disk drives. The resulting file system was 1.6 petabytes in size.
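Striping a single file across many drives can be sketched in a few lines. This is a hypothetical round-robin mapping for illustration only — GPFS's real block-allocation policy is more sophisticated — with the block size chosen arbitrarily and the disk count matching the 416 controllers in the demo:

```python
# Hypothetical sketch of file striping: split a large file into
# fixed-size blocks and distribute them round-robin across disks,
# so clients reading the file pull from all disks in parallel.
BLOCK_SIZE = 4 * 1024 * 1024   # 4 MB stripe unit (illustrative choice)
NUM_DISKS = 416                # matches the controller count in the demo

def block_location(offset):
    """Map a byte offset within the file to (disk, block-on-that-disk)."""
    block_index = offset // BLOCK_SIZE
    return block_index % NUM_DISKS, block_index // NUM_DISKS

# The first 416 blocks each land on a distinct disk:
disks = {block_location(i * BLOCK_SIZE)[0] for i in range(NUM_DISKS)}
print(len(disks))  # 416 — one block per disk before wrapping around
```

Because consecutive blocks live on different disks, 1,000 clients reading the same file spread their requests across the whole array instead of queuing on a single drive.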

Researchers on the project say this kind of computing could be applied in a range of other fields.

“You can imagine the kind of problems you can solve with this, like a tsunami warning device that would scrutinise huge amounts of information from the ocean and then analyse that quite quickly,” Maher says.

Other applications include medical research and online gaming.

A future area of focus is developing ways to match storage resources to data automatically as data is generated.
