NASA eyes autonomic computing, but warily

WASHINGTON (10/28/2003) - NASA is approaching autonomic computing with great interest but sees large challenges and potential costs with the still-emerging technology, according to one of its top IT officials.

"I am extremely thrilled by the prospect of autonomic computing and I think it is, in many ways, a breakthrough technology," said Peter Hughes, assistant chief for technology at the IT division of NASA's Goddard Space Flight Center. "But I think there are going to be significant challenges."

Hughes spoke in Washington Tuesday at a forum on autonomic computing sponsored by the Woodrow Wilson International Center for Scholars.

Among those challenges is the development of scalable systems that can handle cascading problems affecting multiple systems, said Hughes. Developing diagnostics that work across those systems will also be a major issue, he said.

"We've encountered huge challenges in validating and testing some of these technologies, and it ended up taking a lot more time and being a lot more costly than we ever imagined," said Hughes.

Autonomic computing builds upon existing technology, with the goal of developing management capabilities that can be applied to legacy systems. Vendors are already delivering bits and pieces of the autonomic approach in self-managing and self-optimizing systems management tools. But systems that can manage an entire enterprise, leaving IT managers free to focus on high-level issues instead of mundane and thorny system configuration work, could still be years away.
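As a rough illustration of what a self-optimizing tool does, the sketch below shows a toy monitor-and-adjust loop in Python. It is only a conceptual example: the latency metric, thresholds and resize_worker_pool action are hypothetical placeholders, not any vendor's actual interface.

```python
# Illustrative sketch only: a toy "self-optimizing" control loop in the
# spirit of autonomic management (monitor, analyze, plan, execute).
# All names (check_latency, resize_worker_pool, thresholds) are
# hypothetical placeholders, not a real product's API.
import random
import time


def check_latency() -> float:
    """Monitor: stand-in for reading a real metric (average request latency, ms)."""
    return random.uniform(50, 500)


def resize_worker_pool(current: int, target: int) -> int:
    """Execute: stand-in for an action that reconfigures the managed system."""
    print(f"resizing worker pool: {current} -> {target}")
    return target


def autonomic_loop(high_ms: float = 300, low_ms: float = 100,
                   interval_s: float = 5, cycles: int = 3) -> None:
    workers = 4
    for _ in range(cycles):                        # run a few cycles for demonstration
        latency = check_latency()                  # monitor
        if latency > high_ms:                      # analyze/plan: scale up under load
            workers = resize_worker_pool(workers, workers + 2)
        elif latency < low_ms and workers > 2:     # scale down when idle
            workers = resize_worker_pool(workers, workers - 1)
        time.sleep(interval_s)


if __name__ == "__main__":
    autonomic_loop(interval_s=0.1)
```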

The autonomic approach was outlined in 2001 by IBM and is based on the belief that the increasing complexity of systems is too big a burden on businesses and governments. "Nobody can understand all the pieces and parts as they come together," said Alan Ganek, the IBM Corp. vice president leading that company's efforts in this area. Ganek also spoke at the forum.

This complexity is making the job of running a corporate data center a lot more difficult, said Ganek. Data center personnel spend increasing amounts of time fixing problems, and 40 percent of system outages are caused by operator error, he said.

But it was clear that the panelists, who included Gail Kaiser, director of the programming systems laboratory at Columbia University, felt there is still much to learn -- such as the real costs of autonomic approaches.

U.S. government agencies, for instance, have been moving from proprietary to commercial off-the-shelf systems to try to standardize and reduce their IT costs. But seemingly simpler solutions can bring new and difficult problems, a point Hughes alluded to when describing the difficulty NASA has had trying to synchronize an upgrade of its commercial systems.

"Often we displace some simple solution with more complex ones and are not looking at how much it will cost to maintain that system and keep it operating," said Hughes.

Software bugs are another issue. Hughes said one solution might be to rely more on pretested components.

Kaiser said the concept of perpetual testing, where a system is continually tested even after it has been deployed, is related to the idea of self-managing or autonomic systems. "Software engineers have long recognized that you're never going to get out that last bug in the lab -- you have to eventually put something in the field," said Kaiser. "But you shouldn't stop testing it then, and you should figure on continuing to patch, repair it and reconfigure it."
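A minimal sketch of that perpetual-testing idea, under the same assumptions of a deployed system that can repair itself, might look like the Python loop below. The probe_service and restart_service functions are hypothetical stand-ins for a real health check and repair action, not an existing tool's API.

```python
# Illustrative sketch only: "perpetual testing" as a lightweight self-check
# that keeps running after deployment and triggers a repair action when the
# check fails. probe_service and restart_service are hypothetical stand-ins.
import logging
import time

logging.basicConfig(level=logging.INFO)


def probe_service() -> bool:
    """Field-side health probe; a real probe might issue a canary request."""
    # Placeholder: pretend the check passes most of the time.
    return time.time() % 7 > 1


def restart_service() -> None:
    """Stand-in repair action (patch, reconfigure, or restart the component)."""
    logging.warning("probe failed; restarting service")


def perpetual_test(interval_s: float = 10, cycles: int = 3) -> None:
    for _ in range(cycles):
        if not probe_service():   # keep testing in the field, not just the lab
            restart_service()     # self-repair rather than wait for an operator
        time.sleep(interval_s)


if __name__ == "__main__":
    perpetual_test(interval_s=0.5)
```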
