Off-the-shelf clusters from Dell

SAN FRANCISCO (09/26/2003) - Huge data sets, heavy-duty algorithms: That's the foundation of high-performance computing (HPC). Forget about serving up Web pages or sharing files; HPC applications crunch numbers, lots of numbers, typically based on the contents of terabyte databases. They comb through earthquake data to discover new fault lines. They simulate molecular behavior in order to test new drugs. They model the airflow over an aircraft wing or automobile body to determine fuel efficiency.

For decades, such serious number-crunching was the domain of monolithic supercomputers, such as the famous Cray Inc. machines or huge IBM Corp. mainframes. Now, more often than not, those tasks are parallelized and distributed among a cluster of commodity servers, tied together with special middleware. So-called Beowulf clusters of 8, 32, 128, even thousands of servers are now deployed within academia and industry, happily solving super-complex problems in petrochemistry, bioinformatics, finance, manufacturing, and other pursuits.
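
To make the parallelization concrete, here is a minimal sketch of the kind of job such a cluster runs: each process computes part of a numerical integration, and the partial results are combined with an MPI reduction. The sketch uses mpi4py, the Python bindings for MPI, purely for illustration; production cluster codes are more typically written in C or Fortran against the MPI library directly, and the interval count here is arbitrary.

```python
# Minimal sketch: estimate pi by numerical integration, split across MPI ranks.
# Illustrative only -- uses mpi4py (Python MPI bindings); real HPC codes of this
# era were usually C or Fortran calling the MPI library directly.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # this process's index within the cluster job
size = comm.Get_size()      # total number of processes across all nodes

n = 10_000_000              # number of integration intervals (illustrative)
h = 1.0 / n

# Each rank handles every size-th interval: rank, rank+size, rank+2*size, ...
local_sum = 0.0
for i in range(rank, n, size):
    x = (i + 0.5) * h
    local_sum += 4.0 / (1.0 + x * x)

# Combine the partial sums on rank 0 over the cluster interconnect.
pi_estimate = comm.reduce(local_sum * h, op=MPI.SUM, root=0)

if rank == 0:
    print(f"pi ~= {pi_estimate} using {size} processes")
```

Launched with something like mpirun -np 128 python pi_estimate.py, the MPI middleware starts one copy of the program per processor across the cluster and carries the partial sums over the interconnect.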

Assembled from off-the-shelf, commodity hardware, HPC clusters are less expensive and more scalable than monolithic high-end computers, and they are also more familiar to developers, administrators, and users. Through its integration efforts, Dell Inc. has broken down one of the barriers standing in the way of such HPC clusters: complexity. Dell has made HPC clustering nearly as straightforward as deploying a commercial application.

This is a new market space for Dell, and a far cry from the company's traditional high-volume consumer space. But the company is apparently serious about this low-volume effort to build a direct-sales supercomputer business, having assembled a team of a dozen-odd soft-spoken Ph.D.s in a specialized HPC lab.

Where Dell really distinguishes itself is in the integration of the server cluster package. The procedure for bringing up the cluster is straightforward: install the operating system, middleware, and tools onto the two management servers, then use the Felix utility to push the software and configuration files to the cluster nodes.
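
Felix is Dell's own utility, and its actual interface is not shown here. Purely as a hypothetical illustration of the underlying pattern, a management server copying configuration out to a list of compute nodes and restarting a service, a sketch along these lines captures the idea (the node names, file paths, and use of scp/ssh are assumptions, not how Felix works; gmond is Ganglia's monitoring daemon, used only as an example):

```python
# Hypothetical sketch of the general "push to nodes" pattern a management
# server performs. This is NOT the Felix utility's interface; node names,
# paths, and the ssh/scp mechanism are made up for illustration.
import subprocess

COMPUTE_NODES = [f"node{i:03d}" for i in range(1, 33)]   # e.g. a 32-node cluster
CONFIG_FILES = ["/opt/cluster/etc/hosts", "/opt/cluster/etc/gmond.conf"]

def push_to_node(node: str) -> None:
    """Copy configuration files to one node and restart its monitoring daemon."""
    for path in CONFIG_FILES:
        subprocess.run(["scp", path, f"{node}:{path}"], check=True)
    subprocess.run(["ssh", node, "service", "gmond", "restart"], check=True)

for node in COMPUTE_NODES:
    push_to_node(node)
```

A tool like Felix automates this kind of fan-out across the whole cluster, which is what reduces bring-up to the two-step procedure described above.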

The ease of use masks the real effort Dell made to ensure that the hardware, networking infrastructure, middleware, and utilities work well together. Linux and Ganglia are written by many different contributors working in open source project teams. The Myrinet and MPI tools and drivers are commercial products, but they must be tested and integrated with the specific version of Linux and qualified to run on Dell's hardware.

As anyone involved with HPC clusters can attest, getting everything to work together is no small task, and the challenge is compounded each time a tool is updated. Not only has Dell already done the integration work; it has also bundled the compatible pieces and supports them as a single package. As HPC clusters move out of the academic research lab (with its source of free student labor) into the commercial sector, Dell's integration and support services become essential.
