Almost every week, Computerworld uncovers a security flaw: another virus that exploits Microsoft Office, a vulnerability in Windows or Unix, a Java problem, a security hole. Why can’t vendors get this right, we wonder? When will it get better?
I don’t believe it ever will. Here’s why:
Security engineering is different from any other type of engineering. Most products, such as word processors, are useful for what they do. Security products, or security features within products, are useful precisely because of what they don't allow to be done. Most engineering involves making things work; security engineering involves figuring out how things can be made to fail, and then preventing those failures.
In many ways, this is similar to safety engineering. Safety is another engineering requirement that isn’t simply a “feature.” But safety engineering involves making sure things don’t fail in the presence of random faults: It’s about programming Murphy’s computer, if you will.
Security engineering involves making sure things don’t fail in the presence of an intelligent and malicious adversary who forces faults at precisely the worst time and in precisely the worst way. Security engineering involves programming Satan’s computer.
And Satan’s computer is hard to test. Virtually all software is developed using a try-and-fix methodology. Small pieces are implemented, tested, fixed and tested again. Several of these small pieces are combined into a larger module, and this module is tested, fixed and tested again. The end result is software that more or less functions as expected, although in complex systems, bugs always slip through.
This just doesn’t work for testing security. No amount of beta-testing can ever uncover a security flaw. Remember that security has nothing to do with functionality. If you have an encrypted phone, you can test it. You can make and receive calls. You can try, and fail, to eavesdrop. But you have no idea if the phone is secure or not.
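The encrypted-phone point can be made concrete with a small sketch (the function names are hypothetical, not from this column): two password checks that behave identically under every functional test, yet one of them leaks information to an attacker through a timing side channel.

```python
import hmac

def naive_check(stored: str, attempt: str) -> bool:
    # Functionally correct: returns True exactly when the passwords match.
    # But an ordinary comparison can return as soon as it finds a differing
    # character, so its running time can reveal how much of the attempt is
    # right -- a security flaw no functional test will ever flag.
    return stored == attempt

def careful_check(stored: str, attempt: str) -> bool:
    # Same functional behavior, but hmac.compare_digest takes time
    # independent of where (or whether) the inputs differ.
    return hmac.compare_digest(stored.encode(), attempt.encode())

# Every functional test gives identical results for both versions:
for attempt in ("secret", "secreX", "s", "", "wrong"):
    assert naive_check("secret", attempt) == careful_check("secret", attempt)
```

Both versions pass the same test suite; the difference between them is invisible to functionality testing and shows up only under adversarial review.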
The only reasonable way to “test” security is to perform security reviews. This is an expensive, time-consuming, manual process. It’s not enough to look at the security protocols and the encryption algorithms. A review must cover specification, design, implementation, source code, operations and so forth. And just as functional testing can’t prove the absence of bugs, a security review can’t show that the product is in fact secure.
It gets worse. A security review of Version 1.0 says little about the security of Version 1.1.
A security review of a software product in isolation doesn’t necessarily apply to the same product in an operational environment. And the more complex the system is, the harder a security evaluation becomes and the more security bugs there will be in the product.
Suppose a software product is developed without any functional testing at all. No alpha or beta testing. Write the code, compile it and ship. The odds of this program working at all — let alone being bug-free — are zero. As the complexity of the product increases, so will the number of bugs. Everyone knows testing is essential.
This is where we are in security. Products are being shipped without any, or with minimal, security testing. And the products are getting more complex every year: larger operating systems, more features, more interactions between different programs on the Internet.
Windows NT has been around for a few years, and security bugs are still being discovered. Expect many times more bugs in Windows 2000.
Expect the same thing to hold true for every other piece of software.
This won’t change. Computer usage, the Internet and convergence are all happening at an ever-increasing pace. Systems are getting more complex, and necessarily more insecure, faster than we can fix them — and faster than we can learn how to fix them.