FRAMINGHAM (10/23/2003) - The blame for software bugs belongs nearly everywhere. It belongs to vendors that rush products to market without adequately testing them, a sin also shared by corporate in-house development teams. It belongs to a legal system that has given software developers a free pass on bug-related damages. And it belongs to university computer science programs that stress development over testing.
Blame, however, doesn't belong with Sarfraz Khurshid, an MIT researcher who's at the forefront of developing automated testing processes for software. Testing software involves generating "inputs" -- instructions for software to follow. But there are nearly as many ways to test software as there are snowflake patterns: for every possible way software can break, there must be a test that can detect it.
"There are an infinite number of inputs," says Khurshid, but once you know how to automatically generate them, "which ones do you generate?"
Khurshid has developed algorithms for generating inputs, and he expects the automated testing algorithms to improve over the next few years. But experts say improvements in automated testing technology alone won't necessarily lead to better software quality.
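One simple family of input-generation algorithms enumerates every input up to a small size bound and checks each against an expected property, so any bug reachable within the bound is found automatically. The sketch below illustrates that general idea only -- it is not a description of Khurshid's specific algorithms, and `buggy_sort` is a hypothetical function under test invented for the example.

```python
from itertools import product

def bounded_inputs(values, max_len):
    """Yield every list over `values` of length 0 through max_len."""
    for n in range(max_len + 1):
        yield from (list(t) for t in product(values, repeat=n))

def buggy_sort(xs):
    # Hypothetical function under test: silently drops duplicates.
    return sorted(set(xs))

# Check every generated input against a correctness property.
failures = [xs for xs in bounded_inputs([0, 1, 2], 3)
            if buggy_sort(xs) != sorted(xs)]

print(failures[0])  # the smallest input that exposes the bug: [0, 0]
```

Even with only three values and lists of length three, the enumerator produces 40 inputs; real systems face the combinatorial explosion the article describes, which is why choosing *which* inputs to generate matters.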
"Most software products have not been designed for quality," says Herb Krasner, who heads the Software Quality Institute at the University of Texas at Austin. "In the commercial world, quality software is not the primary motivation -- the primary motivation is to get to market as fast as possible. It's the race to market that compromises quality in many cases."
Inadequate software testing is blamed for US$60 billion in annual costs incurred by users and vendors, according to a study conducted last year by the Commerce Department's National Institute of Standards and Technology in Gaithersburg, Md. Viruses that exploit defects can cause billions of dollars in additional damages.
Today, there are no standards for measuring software quality -- broadly defined as functionality, reliability, usability, efficiency, maintainability and portability -- so comparing the quality of competing products is impossible. But a long-term effort is under way to develop such standards.
This effort was prompted, in part, by one of the most famous software bugs of all: the one that caused the 1999 crash of the Mars Polar Lander. It was a watershed event for NASA and led to a decision by the space agency to expand its work with universities on quality issues. It even signed a lease allowing Carnegie Mellon University to establish a West Coast campus at Moffett Field, Calif.
NASA also helped to spearhead the creation of the Carnegie Mellon-based Sustainable Computing Consortium (SCC), an effort that includes companies such as FedEx Corp., Pfizer Inc., Microsoft Corp. and Oracle Corp.
Engineers know how to test the quality of many complex products, such as planes, drugs and bridges. But standards for measuring software quality and comparing similar software products don't exist.
What does it mean, for instance, when Microsoft says it wants to produce "trustworthy" products or Oracle sets "unbreakable" as its standard?
"How do you compare something that is trustworthy against something that is unbreakable?" asks William Guttman, director of the SCC at Pittsburgh-based Carnegie Mellon.
The SCC is seeking to support the creation of standards and specifications that will make it possible to measure a system's attributes, such as dependability and security. If these attributes can be measured in a software product, users will be able to make purchases based on quality, says Guttman, who compares the task ahead for his group to developing a municipal building code from scratch.
Blame the Lawyers
But some argue that future improvements in software quality are directly tied to legal and regulatory issues.
Cem Kaner, a computer science professor at the Florida Institute of Technology in Melbourne, who examined software's legal protections in the book Bad Software (John Wiley & Sons, 1998), compares the legal liability protections enjoyed by vendors to what automakers had before the arrival of "lemon laws" that guarantee consumers certain rights.
Vendors typically limit liability to the cost of the software. Damages suffered by a user as a result of downtime, such as lost sales, aren't covered.
The liability issue is becoming particularly pressing because of the damage wreaked by software viruses, which often exploit software defects. As these attacks become more sophisticated, they have the potential to expose customer financial records, disrupt businesses and "leave a wake of enormous destruction behind them," says Kaner.
Building software with as few bugs as possible is Dale Campbell's passion. As the senior director of IT at Warner Music Group, he sees a tremendous opportunity for lowering costs by improving software development.
Campbell is using tools such as Monrovia, Calif.-based Parasoft Corp.'s error-detection software to improve his group's code. He says his development processes are improving, as are the tools he's buying.
"Software defects really equate to rework, and rework equates to wasting money," says Campbell, a software engineer by training who manages application development at the New York-based entertainment company. "I don't think we would accept the same level of defects in the physical products we bought."