The man credited with doing the groundwork on Microsoft’s trustworthy computing initiative is defiant in the face of suggestions that the company has been slow to own up to its security responsibilities.
Craig Mundie, Microsoft’s technology chief, says he first began working on security issues four years ago. He says the company brought forward the introduction of the initiative after the September 11 terrorist attacks in the US and the release of the Nimda virus a few days later, because customers were suddenly demanding security ahead of product features. Computerworld editor Anthony Doesburg talked to Mundie about the initiative.
Doesn’t the fact that you’ve waited for CIOs to demand greater security put the responsibility onto them, whereas Microsoft should have made product security a priority to start with?
That’s a nice dream. In practical terms the rate of evolution of the attack moves at the speed that the computer system evolves. Collectively we all pull a huge tail around behind us. Right now there are 300 million Windows users; 100 million of them are still on Windows 95. If you were the guy at Microsoft in 1992 or 1993 who was designing the features for Windows 95 you would be designing them against a machine that was maybe a 386 running at tens of megahertz with 16MB of RAM. You make a whole lot of design decisions — strength of encryption keys, for example — based on the capabilities of the machine.
[Resisting] the kind of attack you can mount with today’s computer against a system designed in 1993 and deployed in 1995 is sort of an impossible mission.
It’s like saying in conventional warfare that I think I know what the bombs and bullets look like; I’ll go and build a bunker. It’ll have one-foot thick walls and be eight feet under the sand. Then along comes a guy with a bunker buster bomb and, boom, you’re dead. Did the guy who designed the bunker do a bad job? Well, he only designed for the capability of the threat he knew.
That’s in part the problem Microsoft has. We have to design within the limits and the threats that are known at the time. Which isn’t to say we don’t make mistakes — we do. That’s why the third part [of trustworthy computing] — security in deployment — is so important.
Are you encountering any cynicism that Microsoft has launched this initiative?
I’ve been quite pleasantly surprised that the worst the cynics could say is: why didn’t you do this three years ago?
Isn’t that a pretty sharp point?
Relative to what we usually hear about what they think we’re up to and how nefarious we are, it’s really quite complimentary … because they actually agree we’re doing something quite interesting.
We’re a business and in some sense we do what the marketplace pays us to do. We recognise that to some extent we have a responsibility that transcends that but there’s no doubt that so far in business we’ve been quite well rewarded by the customers for the balance that we chose.
Which isn’t to say that we wouldn’t love the world to be perfect; that we wouldn’t love to say that we never had these coding problems; that we wish all the programmers in the world who learnt C had been trained in a different language so they didn’t have a propensity to code buffer overflows. But some of these things are deep cultural problems and every company has them right now.
Microsoft happens to get a lot more visibility because our surface area is pretty large. But if you do an analysis of actual bug rates, as many non-Microsoft people do — even discounting our surface area — in absolute terms our vulnerability per operating system process year is less than or equal to almost any other product's, including all the open source products; and per line of code, we’re almost six times better.
So we know we’re not perfect but we’re committed to making huge improvements and I think we will raise the bar … but only time will tell.