Received wisdom would have it that transparency makes systems more secure by allowing anyone to view the underlying software code, identify bugs and make peer-reviewed changes. Computer security and cryptography expert Bruce Schneier certainly adheres to that theory. He’s been saying engineers should “demand open source code for anything related to security” since 1999. But not all security experts agree.
John Viega, co-author of Secure Programming Cookbook for C and C++, suggests the notion is merely a kind of “cultural elitism” stemming from the open source community’s belief that it’s better at writing code than anyone else. And Fred Schneider of the Department of Computer Science at Cornell University in New York writes: “There is no reason to believe that the many eyes inspecting [open] source code would be successful in identifying bugs that allow system security to be compromised.”
The controversy surrounding the security and integrity of electronic voting machines in the 2004 US elections first brought the argument about open source security to the public eye. It was hoped that the introduction of Diebold’s electronic touch-screens would simplify voting and remove the potential for fraud. But the lack of a paper trail made the machines contentious, amid complaints that there was no way to tell if they had been rigged, were insecure or faulty. Commentators claimed Diebold Systems’ proprietary GEMS voting software was hopelessly compromised.
Memos leaked to voting activists and journalists by a hacker who broke into an insecure Diebold server contained information about the manufacturer’s internal workings. The memos revealed that the Microsoft Access database used by the Diebold system to count votes was not password-protected, and votes could be altered by anyone entering the database through an insecure backdoor, the machine itself or even the phone system.
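The core flaw the memos described was that anyone who could open the results database could rewrite vote totals directly, with no authentication and no audit trail. A minimal sketch of why that matters is below — the real system used Microsoft Access, so sqlite3 stands in here purely for illustration, and the table and column names are invented:

```python
import sqlite3

# Hypothetical stand-in for an unprotected results database.
# (Diebold's GEMS used Microsoft Access; sqlite3 is used here only
# for illustration, and this schema is invented.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vote_totals (candidate TEXT, votes INTEGER)")
conn.executemany("INSERT INTO vote_totals VALUES (?, ?)",
                 [("Candidate A", 5000), ("Candidate B", 4800)])

# With no password or access control on the file, anyone who can
# open the database can rewrite the totals directly -- and nothing
# records that the change was made.
conn.execute(
    "UPDATE vote_totals SET votes = 4000 WHERE candidate = 'Candidate A'"
)

tampered = dict(conn.execute("SELECT candidate, votes FROM vote_totals"))
print(tampered)  # {'Candidate A': 4000, 'Candidate B': 4800}
```

The point is not the database engine but the missing control: with no password layer and no tamper-evident log, a direct edit is indistinguishable from a legitimate count.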
In its October 2001 election, the Australian Capital Territory used open source software for its voting machines to prevent similar backdoors and security loopholes, and the Australian Electoral Commission (AEC) now hopes to move completely to open source once it has completed a difficult migration from legacy green-screen technology.
Dave Lane, director of Egressive, a Christchurch-based open source developer, recognises that building a truly secure electronic voting system presents a significant challenge. “Reports I’ve read indicate that the Diebold systems in the US were unbelievably badly designed. The only way for such a system to succeed is for the public to trust it. That can only be achieved if its development is open and subject to extensive peer review, just as any important scientific theory or legal case must be.”
Lane notes that Linux is used for reasons of security by organisations such as NASA and the NSA in the US, as well as universities and ISPs worldwide. “The Linux kernel, the core of any Linux system, is certainly more tightly and securely coded than its proprietary counterparts, due in large part to the scrutiny of many thousands of individuals who have built businesses that now depend on the reliability, security and efficiency of Linux.”
Richard Waid, technical director of IOPEN Technologies, agrees that OSS is a better way of building a secure system. “The only way, in my professional opinion, to build a truly secure system is to allow third parties to attempt to break it.” Waid says voting systems should be left to people skilled in provably correct software design and cryptography. “It’s absolutely imperative that people are allowed to view any documents related to the public good. To have a system that collects votes in an opaque fashion with no hardcopy record is simply ludicrous.”
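Waid’s objection to opaque, record-free vote collection can be made concrete with a tamper-evident log. The sketch below is illustrative only — a simple hash chain, not the design of any actual voting system — showing how each recorded entry can be cryptographically linked to the one before it, so a later alteration breaks every subsequent link:

```python
import hashlib

def append_vote(log, vote):
    """Append a vote, chained by hash to the previous entry."""
    prev = log[-1][1] if log else "genesis"
    digest = hashlib.sha256((prev + vote).encode()).hexdigest()
    log.append((vote, digest))

def verify(log):
    """Re-derive every link; any altered entry fails the check."""
    prev = "genesis"
    for vote, digest in log:
        if hashlib.sha256((prev + vote).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True

log = []
for v in ["Candidate A", "Candidate B", "Candidate A"]:
    append_vote(log, v)

ok_before = verify(log)                # untampered chain checks out
log[0] = ("Candidate B", log[0][1])    # silently alter the first vote
ok_after = verify(log)                 # verification now fails
```

A hardcopy record serves the same purpose more simply: it gives auditors an independent artefact against which the electronic count can be checked.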
But Steve Lewin, director of intellectual property at Quantel Business Science, says there are two sides to the argument that open source software is more secure. “If it’s all open source, then people can sneak code in there that may be malicious, but the other side of it is that if that sort of code gets in there, then it’s going to be recognised by somebody else. But my overall conclusion is that it’s a darned sight safer on balance having open source code than it is having proprietary code.”
Neil James, information technology strategy and policy consultant at the University of Otago, acknowledges that being able to view the underlying software code can improve security. “The more people who spend time looking through the code, the better potential there is for finding holes.”