Craig Mundie assumed the role of chief visionary at Microsoft in June 2008, after Bill Gates retired from day-to-day operations.

Will belt-tightening require Microsoft to scale back on basic research, as some competitors have?
No. We take the opposite view: the tighter the economic times, the more the focus has to be on maintaining your R&D investment. You want the normal cycle to produce timely results, but we've always believed that the pure research component was critical to us. It gives us the ability to continue to enhance the businesses we're in, [and] to disrupt industries that we choose to enter, [and] it's a shock absorber that allows us to deal with the arrival of the unknown from competitive actions or other technology breakthroughs.

How do you balance the need for backward compatibility with the need to innovate?
There are several ways we seek to insert innovation into products. First, we develop new features for the products we have. Second, we create new products and put them alongside the products we have. As we look in the platform and tools areas toward the future, we expect that there's a lot of change coming in the underlying architectures – for example, virtualisation and the fact that there will be many CPU elements that allow side-by-side execution of things. That can provide perfect compatibility, while still allowing the introduction of whole new capabilities.

What is your proudest R&D achievement?
I did a lot of the early work in non-PC-based computing. Things that we have today that have matured into our game console business, our cellphone business, and our Windows CE-based, Windows Embedded, and Windows Mobile capabilities all started in the groups I formed here between 1992 and 1998. I look at the progress we've made there, and I take quite a bit of pride in the fact that we anticipated those things and we were able to get into so many of them. The company's strategy was to recognise that, ultimately, people would have many smart devices, and we wanted to have some cohesive way of dealing with all of them.

What will be the next big technology wave?
What happens in waves is the shift from one generation of computing platform to the next. That platform gets established by a small number of killer apps. We've been through a number of these major platform shifts, from the mainframe to the minicomputer to the personal computer to adding the internet as an adjunct platform. We're now trending to the next big platform, which I call the client plus the cloud. That's one thing, not two things. Today we've got a broadening out of what people call the client. My 16 years here was in large measure about that. And then we introduced the network. The internet was a place where you had web content and web publishing, but other than being delivered on some of those clients, the two things were somewhat divorced. The next thing that will emerge is an architecture that allows the application developer to think of the cloud plus client architecturally as a single thing. In a sense, it is like client/server computing in the enterprise. What the world is searching for now is the right combination of underlying technologies and some killer apps, which will demonstrate the capabilities of this integrated end-to-end view of the cloud plus client that will enable things that the world hasn't seen yet.

What technologies will drive this?
The technologies come at two levels. What are the underlying shifts in the lower-level platform technologies that will allow that to happen? And what are the things that might change the user's experience in some fundamental way? There are two big things that form the nucleus of those two big changes. The microprocessor itself is going to change to this heterogeneous, many-core capability over the next four or five years. To get performance, you're going to have to write parallel applications, and if it's cloud plus client, you're going to have to write distributed parallel applications. Those have historically been viewed as hard problems, but they will have to become de rigueur in the future. The second thing is the technologies of man-machine interaction are evolving and will be aided by the quantum change in computational capabilities, [so] for the first time, client devices will be able to implement natural, more humanistic ways of dealing with people. We call that next era the natural user interface. Think of it as the successor to the graphical user interface. And yet you have to figure out, what are the killer apps?

In the era of client plus the cloud, what will be the role of Windows?
How relevant was Windows when you thought the world was DOS? The answer is it became pretty relevant. That's the way I think about this problem now. We're going to move to a new platform with new models of human interaction solving new problems at a higher level of abstraction. The operating system will be the thing that creates the mapping between the physics of the computing environment and our ability to write these applications and portray them for people. You may not have the same direct association of the operating system as a part of the application. This doesn't mean you won't have clients and screens. That correlation remains. I'm describing a world where there's no less of a requirement for controlling complex hardware that arguably will get even more complicated. But the boundaries between what the user associates as the app, what part lives in the cloud, and what part lives on the device in their hand will be blurred. There will be a new class of apps, and I think that those will be as different as the difference when we moved from the command-line interface to the Windows model. That's the way I see the future.
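Mundie's earlier point about parallel applications becoming de rigueur can be illustrated with a minimal sketch (Python chosen for brevity; the work function and pool size are hypothetical, not from the interview). The idea is structural: work is expressed as many independent tasks handed to an executor, rather than as one sequential loop.

```python
# Illustrative only: a hypothetical unit of work plus the shape of a
# parallel map over independent tasks.
from concurrent.futures import ThreadPoolExecutor

def score(item: int) -> int:
    """Hypothetical unit of work, independent of every other item."""
    return item * item % 97

def run_parallel(items):
    # The executor fans the independent tasks out to a pool of workers;
    # map returns results in the original input order.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(score, items))
```

Swapping in `ProcessPoolExecutor` would spread CPU-bound Python work across many cores, and handing the same task list to machines over the network is, in miniature, the "distributed parallel" step he describes for cloud plus client.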