Microsoft security exec talks of bugs and threats

Six years after Trustworthy Computing was launched, Scott Charney says progress has been significant

As corporate vice president of Trustworthy Computing (TwC) at Microsoft, Scott Charney is among those at the helm of the company's long-standing efforts to improve the security of its products. In an interview with Computerworld, Charney — a former federal prosecutor of computer crimes and an assistant district attorney in the Bronx, New York, before that — talked about TwC, the changing threat environment and what security fears keep him awake at night.

Does it frustrate you that Microsoft still gets a pretty bad rap on security despite some of the initiatives the company has taken in recent years?

It depends on what the criticism is. The other vendors are doing things but, to be blunt, I don't think any vendor has done as much as we have done. In fairness, a lot of people have given us credit for that. We used to be the laughingstock of security, and now you read all sorts of articles and analysts' reviews saying you should follow Microsoft's lead.

The challenge is really quite often in dealing with unrealistic expectations. We still have vulnerabilities in our code, and we'll never reduce them to zero. So sometimes we will have a vulnerability and people say to me: "So the [Security Development Lifecycle (SDL)] is a failure, right?" No, it isn't. Getting rid of every bug was the SDL's aspirational goal, but let's be realistic for a minute: it's not an achievable one.

Sometimes you get these questions, where people say: "You have invested all this money and effort, and you talk about the SDL, and you are still not perfect." I don't think that's a fair criticism. Look, that bridge in Minnesota just collapsed. How long have we been building bridges? We know how to build bridges, right? Sometimes people just have unrealistic expectations of what we can do.

It's been close to six years since Microsoft launched its Trustworthy Computing initiative. What has its biggest contribution been?

The biggest contribution in the security space has been the [SDL]. We have processes in place now where we build documented threat models at design time. As you architect and build code, you are always designing mitigations against the threats in those models, and the threat models get updated during the course of development to keep them current. At the back end of the process, we have a final security review, where we look at the product, all the bug scrubs and all the work we have done, to see if the product is ready to ship from a security perspective. This, I think, is the biggest change. If you look at product after product, year over year, our vulnerability counts are going down dramatically as our products get better.
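For readers who have not seen a threat model, here is a hypothetical sketch of the kind of information a single documented entry might capture, expressed as a small Python structure. The fields, the example values and the STRIDE-style category are assumptions for illustration, not Microsoft's actual SDL artifact format.

```python
# A hypothetical threat-model entry; the fields and values are
# illustrative, not Microsoft's actual SDL artifact format.
from dataclasses import dataclass

@dataclass
class Threat:
    component: str    # the piece of the design being analysed
    category: str     # a STRIDE-style class: Spoofing, Tampering, ...
    description: str  # how an attacker could abuse the component
    mitigation: str   # the design-time control that addresses it
    status: str       # revised as development proceeds, per the interview

entry = Threat(
    component="logon service",
    category="Spoofing",
    description="attacker replays captured credentials",
    mitigation="challenge-response authentication and short-lived tokens",
    status="mitigated; re-check at the final security review",
)
print(entry)
```

Keeping entries like this as structured, living documents is exactly what Charney describes: they can be revisited and updated as the design changes during development.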

Vista is the first operating system that's gone through the SDL process from the beginning. Are you satisfied with the impact SDL had on Vista security?

Yes and no. First of all, I am satisfied in the sense that the vulnerability counts are down for Vista over the comparable periods for XP. We also know that vulnerabilities won't get to zero with complex code written by human beings and all of that. So the question is: where is that sweet spot, and have we hit it yet? My sense is not yet. We need better automated tools to find bugs, and that is a big issue for the entire industry.

We have lots of tools, but I would not say that tool sets have reached complete maturity or that we and the industry have done the best that we can do. We do a lot of human code reviews, and we do find things, and that's great. But when you throw humans at the problem, they spot certain stuff and miss certain stuff, and they don't scale well as code bases get really large. So I think the tools can get better, and I think we can continue to get better. Vista overall continues the progress, but we need to continue to focus on automated tools.
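To give a flavour of what "automated tools" means at the simplest end of the spectrum, the toy scanner below flags calls to C string functions that are common sources of buffer overruns. It is purely illustrative: the banned-function list is an assumption for the example, and production analysers reason about data and control flow rather than matching text.

```python
# Toy illustrative scanner: flag C string functions often implicated in
# buffer overruns. Real static-analysis tools are far more sophisticated.
import re
import sys

BANNED = {"strcpy": "strcpy_s", "strcat": "strcat_s",
          "sprintf": "snprintf", "gets": "fgets"}
CALL = re.compile(r"\b(" + "|".join(BANNED) + r")\s*\(")

def scan(path: str) -> None:
    with open(path, encoding="utf-8", errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            for match in CALL.finditer(line):
                fn = match.group(1)
                print(f"{path}:{lineno}: {fn}() is easy to misuse; "
                      f"consider {BANNED[fn]}()")

if __name__ == "__main__":
    for source_file in sys.argv[1:]:
        scan(source_file)
```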

How is the threat landscape evolving?

Over the last few years, as vulnerabilities have been reduced in code, the bad guys are adapting. So you see, for example, a lot more social engineering. Lots of consumers who are victims of ID theft get taken because they follow a link whose URL looks legitimate but actually goes out to some Eastern European location, where they are asked to enter their username and password. No vulnerability is exploited in social engineering.
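The deception works because the text a user sees in a message does not have to match where the link actually goes. As a rough, purely illustrative sketch (the hostnames are invented, and real phishing detection is far more involved), a mail client could compare a link's visible text against the host its href really points to:

```python
# Illustrative only: flag a link whose visible text names one host while
# its actual target points at another. The hostnames below are made up.
from urllib.parse import urlparse

def looks_deceptive(display_text: str, href: str) -> bool:
    """Naive check: does the text shown to the user name a different
    host than the link's real destination?"""
    shown = urlparse(display_text if "://" in display_text
                     else "http://" + display_text).hostname
    actual = urlparse(href).hostname
    return None not in (shown, actual) and shown != actual

print(looks_deceptive("www.mybank.com", "http://mybank.example.ru/login"))  # True
print(looks_deceptive("www.mybank.com", "http://www.mybank.com/login"))     # False
```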

Studies have also shown for a long time that a lot of successful attacks on companies come either from insiders exploiting their authorised access to systems or from system misconfigurations. Large organisations run very heterogeneous, complex systems, and it's easy to make configuration errors that can be exploited. So it is really a combination of things, and the key is to understand who owns what piece of the risk.

What should companies be doing to mitigate the portion of the risk they own?

There are two risks. One is the risk to their own company, which they may have decent data on and have their arms around. Then there's the risk created by interdependencies, which is very, very hard to measure in some contexts.

In the post-9/11 world, the financial sector had a rude awakening about the impact of telecom on their businesses. They could have been up and running sooner [after the September 11, 2001, terrorist attacks], but this other infrastructure was lost. Understanding those kinds of interdependencies is hard. Having said that, I think customers do have a greater appreciation and awareness of cybercrime issues, and more and more companies are doing a better job of getting documented information security programmes in place, making sure they are putting in the right mitigations and doing defence in depth.

But there are challenges. One is that the threat model changes; you have to constantly think about whether the mitigations you have are adequate for your environment. The other is that business models are changing: everything from moving from traditional phone networks to VoIP, offshoring, global sourcing, anywhere access and the de-perimeterisation of the network. All of those business changes require companies to think about the risk model, how it changes what they are doing and how they need to adapt to mitigate those risks.

How would you characterise Microsoft's relationship with the independent security researcher community at large — the ones who are doing a lot of the research and uncovering all those flaws in your products?

I think it's a lot stronger and better, because we embrace the community and because we recognise their value. There are those in that community who engage in responsible disclosure. They find things that are bad, they report them to us and they leave it to us to get them fixed. They are not only pursuing their passion but helping secure the whole ecosystem.

There are, of course, some researchers who don't engage in responsible disclosure. That is, they find bugs and they publish exploit code. That's frustrating because there's part of me that says you're hurting so many people by doing that. But our relationship with the community overall is a lot better than it was.

When it comes to security, what about consumers and the risk they pose to the ecosystem?

They play a huge part, and it is a somewhat challenging situation. One of the things I talk about often is my mom, because she is 78 and she's found email. I remember encouraging her to get broadband because she was using dial-up. I told her she really needed to get broadband, but to make sure she had a firewall, and she asked me why broadband causes fires.

The reality is my mom doesn't want to become a system administrator, and she does not want to become a security administrator. You have to educate consumers not to make mistakes like opening attachments from unknown sources or following suspicious links. At the same time, we know users will click OK on any dialogue box, so you have to find a way to manage these things. It is really critical that the IT industry does a much better job of what I call security usability. As I said, my mother does not want to configure a firewall, and she doesn't want to have to manage her antivirus. She wants it to be like the telephone or the television: she turns it on and it works.

So who is responsible for securing consumers?

One of the reasons that enterprises are secure is that they have a CIO, a CSO and people dedicated to making sure the network is functioning and secure. Who is the CSO or the CIO for the consumer? The answer, of course, is not simple.

Some access providers, because they are the point of entry to the internet, have the ability to do network access control and provide tools that help keep their customers clean, and some are doing that. Then the vendors who are on the desktop certainly have an obligation to produce more secure code and to make their products more manageable. So it's kind of a shared responsibility between the consumer, the access provider and the vendor. It is not really an equal partnership; there have to be clear roles and responsibilities. Consumers should be educated about responsible behaviour online, but they can't remove vulnerabilities from the code. That has to be our job. So when you think about defence in depth, different things happen at different places, and you have to be clear about who owns each point.

Several high-profile data breaches have prompted some to call for government action. What kind of role should the government have?

I think data-breach laws are a good idea, and Microsoft has actually been an advocate of federal law in this area. The real problem is: can the laws be realistic and manageable? At times, the government has said maybe we need a product liability law for software. OK, what would that law say? That you should build bug-free code? That can't be right. That you should use reasonable practices? I think with the SDL we are doing that. So what would you have me do that I am not doing today? Is it to allow regulators to look at what we are doing? Or is it to allow individuals to pursue class-action suits, in which case we would have to divert a lot of the money we are spending on security to legal fees and lawyers, because it is going to create a huge industry?

And what do you do with the developer in the garage? One of the great things about IT is the low barrier to entry. When you put a product liability regime around something, that low barrier to entry goes away. And what would you do with the open-source, not-for-profit project? You can't hold Microsoft liable because we are a commercial entity with shareholders and not hold Linux liable for making the same mistakes.

What part can the government play?

There are other things the government should be doing, for example basic research and development in security science. The work it is doing with configuration guidelines has made security more robust during the acquisition process. One of the things that we and the government are looking at is, instead of creating [security certification] documentation for products after the fact, relying on the documents and artifacts created in the actual development process, because those are the things that really indicate the quality of what has been built.

The federal government has begun mandating the use of highly standardised, locked-down Windows configurations to bolster security across agencies. Is this a model that needs to be adopted in the commercial and consumer space, too?

I am a big fan of standardised configurations. Microsoft worked closely with the [US] Air Force to originally set the high-security configurations, and that's what [federal CIO] Karen Evans and the Office of Management and Budget have been building upon. It provides for easier management, it is very cost-effective, and it would provide a higher level of security in the consumer space, too.

We have, in fact, started shipping products that are secure by default, which is where we turn off a lot of services to ensure that when a product is put in the marketplace, it is more secure. The challenge, of course, is that when people buy technology, they buy it for functionality, not security. You can ship an operating system so secure it doesn't even boot up. So the question in the consumer model is: what things do you enable by default? Most people buy computers and connect them to the internet so they can shop on the web, do research or read email. So you're going to have to open up a lot of ports, and you are going to have lots of services running. Just shutting down the functionality doesn't work.
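One way to see that trade-off concretely: every service enabled by default is another listening port, which is convenience for the user and a potential way in for an attacker. A minimal, illustrative sketch (the port list is an assumption for the example; probe only machines you own):

```python
# Illustrative sketch: which well-known services on a host accept a TCP
# connection? Each open port is functionality and attack surface at once.
import socket

COMMON_PORTS = {80: "HTTP", 443: "HTTPS", 25: "SMTP",
                135: "MS RPC", 139: "NetBIOS", 445: "SMB"}

def open_ports(host: str, timeout: float = 0.5) -> list:
    """Return the well-known services that accept a TCP connection."""
    found = []
    for port, service in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(f"{port}/{service}")
    return found

print(open_ports("127.0.0.1"))  # check your own machine only
```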

What keeps you awake at night?

Two things. One is complacency. You know, at Microsoft, as we've invested all this effort around security, our numbers have started getting better and better, and more and more people say, "Microsoft got its act together, and others should follow their lead." The danger is that our technologists then say, "OK, our job is done. What's next?"

What I explain to people is that this isn't actually a technology problem we are solving; it's a crime problem. I was in law enforcement for 19 years, and any law enforcement guy will tell you that, aspirationally, his job is to put himself out of business. He will also tell you there is no risk of that happening in the foreseeable future. Crime has been around since time immemorial, and it is not going away. So the first thing is, we can't get complacent, because the bad guys are adaptive. The other is the evolving threat model. In the early years of internet crime, it was mostly young kids exploring networks, and many of them actually did no damage.

But the internet today is global, it is anonymous, it is hard to trace people, and it has very rich targets. As we do more and more things online, and as the criminal population learns it can execute crimes globally with little risk of being caught, criminals are going to migrate to that environment, and they have. That threat model is challenging for law-abiding people. The classical deterrents of Neighbourhood Watch, police patrols and, ultimately, a jail sentence don't apply as easily on the internet, if at all.
