FRAMINGHAM (09/29/2003) - Vincent F. Scarafino
Title: Manager of numerically intensive computing, Ford Motor Co.
Observation: Japan recently grabbed the supercomputer lead from the U.S. with its Earth Simulator for climate modeling. Operating at 36 trillion operations per second, it's the fastest supercomputer in the world.
Prediction: Without a resumption in federal support for supercomputer research and development, the U.S. will fall behind in many areas of science and engineering. Several years ago, the federal government shifted its funding for high-performance computing from exotic architectures to clusters of commodity processors. The clusters are fine for some jobs, but not for the most demanding ones, says Ford supercomputer user Vincent F. Scarafino. He explained to Computerworld's Gary H. Anthes the potential consequences of the U.S. losing the supercomputer race to Japan.
Why worry about U.S. leadership in supercomputing? Why can't Ford just buy supercomputers from Japan if that country makes the best machines?
Advanced supercomputers enable breakthroughs in leading-edge science. Access to these leading-edge supercomputers has, through the years, provided Ford with a competitive advantage. If the U.S. loses leadership in this area, U.S. science and industry will lose early access to the fastest, most capable machines. The Japanese Earth Simulator has already shown this effect. Japanese interests are the primary ones being served. American scientists have some access to the machine, but not at the level they would if it were an American resource available here.
The Earth Simulator is made up of NEC supercomputers that are a refinement of the last vector supercomputer we made here in the mid-1990s, the Cray T-90. Japanese auto companies are formidable competitors. We don't need to hand them yet another advantage.
What should the federal government do to boost U.S. supercomputing technology?
Fund high-end processor design and supporting system components. The goal would be ultrafast processors with memory and I/O systems well matched to the computational speeds.
The government used to do just that, sponsoring development of high-end supercomputer architectures like the Cray vector machines. But now it seems to favor huge clusters of commodity microprocessors. Yes, in the mid-1990s they said that microprocessors were getting faster and faster, and we just need to put a whole bunch of them together and we've got a supercomputer. Well, it doesn't work quite that way. Microprocessors are fast at computing, but in order to run really difficult problems, they have to have really fast access to memory and be able to do I/O quickly. And memory subsystems are extremely expensive.
If you look at the very large machines made up of off-the-shelf components, they get about 5 percent of their theoretical peak performance. But if you look at the Earth Simulator, you see sustained performance from the high 30s to the mid-50s percent of peak.
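The gap Scarafino describes can be made concrete with a bit of arithmetic: sustained throughput is theoretical peak times the fraction of peak actually achieved. The efficiency figures below come from the interview (about 5 percent for commodity clusters, 35-55 percent for the Earth Simulator); the peak numbers are illustrative, not vendor specifications.

```python
# Illustrative sustained-performance arithmetic. Efficiencies are the
# figures cited in the interview; peak TFLOPS values are hypothetical.

def sustained_tflops(peak_tflops, efficiency):
    """Sustained throughput = theoretical peak x fraction of peak achieved."""
    return peak_tflops * efficiency

# A commodity cluster sustaining ~5 percent of a 20 TFLOPS nominal peak...
cluster = sustained_tflops(peak_tflops=20.0, efficiency=0.05)

# ...versus a vector machine in the Earth Simulator's class (~40 TFLOPS
# peak), sustaining something in the cited 35-55 percent range.
vector = sustained_tflops(peak_tflops=40.0, efficiency=0.45)

print(f"cluster: {cluster:.1f} TFLOPS sustained")
print(f"vector:  {vector:.1f} TFLOPS sustained")
```

The point of the comparison: a cluster can post a larger nominal peak yet deliver far less sustained throughput on memory- and I/O-bound problems.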
Are there some applications for which the commodity-based clusters of microprocessors are a good approach?
They provide extremely good price/performance for solving well-known problems whose computations can be evenly split among many independent processors.
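The workload class Scarafino describes — computations that split evenly among independent processors with no communication between them — is what clusters handle well. A minimal sketch of that pattern (illustrative only, not Ford's actual workload; assumes the data length divides evenly among workers):

```python
# Sketch of an "embarrassingly parallel" job: each chunk is computed
# independently, with no data exchanged between workers, so it maps
# cleanly onto a cluster of commodity processors.
from concurrent.futures import ProcessPoolExecutor

def work(chunk):
    # Stand-in for an independent computation on one worker.
    return sum(x * x for x in chunk)

def run(data, n_workers=4):
    # Split the input evenly; assumes len(data) % n_workers == 0.
    size = len(data) // n_workers
    chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(work, chunks))

if __name__ == "__main__":
    print(run(list(range(1000))))
```

The contrast with the "most demanding" jobs in the interview is that those require fast shared access to memory and tight coupling between processors, which this fan-out/fan-in pattern deliberately avoids.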
What could Ford do with a supercomputer 1,000 times more powerful than it has today?
Predict occupant injury in accident scenarios. Improve durability analysis through full-vehicle-lifetime simulation. Explore greater variance in design parameters, helping balance competing design requirements while reducing design cycle time.
Can't Ford do those things today?
The occupant injury thing is an analysis to actually compute what kind of damage is done to human organs -- the brain or liver, for example. Today's analyses with test dummies are very crude. They find at a gross level whether that kind of crash is survivable. But (occupant injury analysis) takes much more computing power than is available now.
What else would you like to be able to do?
Try to understand how exotic materials would work, well enough to understand if they'd work in vehicles. These composite materials are very strong, but understanding how they would react in a failure mode is a difficult problem to solve with today's computers.
What will the next generation of supercomputers look like?
The next generation of supercomputers will most likely resemble the last generation built in the early to mid-1990s, but they will be significantly faster and able to execute difficult algorithms at speeds much closer to theoretical peak rates than commodity-based machines can.
Will there be any breakthroughs in software over the next five years?
There has been significant progress in the area of parallel processing during the last eight years. I would expect continued evolution. I am not aware of any specific areas that seem ripe for breakthroughs, but these things are difficult to predict. Software cannot substitute for raw processing speed.
Does the debate about supercomputer architectures for scientific computing have any relevance for commercial applications such as transaction-processing systems?
If you look back to when supercomputers first came out and there was a real push for high-end machines, what was learned always ended up later in commercial computing. But there is no engine pulling that any more; the trickle-down effect is gone.