Q&A: Microsoft hybrid-cloud push promises benefits for IT, end users, pocketbook

Brad Anderson, Microsoft vice president of Windows Server and System Center, keynoted at the company's user conference TechEd North America 2013 this week, dishing out over a span of two hours a smorgasbord of new versions of platforms – most notably Windows Server, SQL Server, Visual Studio and System Center – that encourage business use of cloud services.

Later he talked about how these changes can help IT staffs do their jobs better, the agility they will bring to end users, and how they can also help cut expenses. He says that despite Microsoft firing off updates to its major server platforms at a faster rate, IT pros need not fear being swamped: they're already getting hit more often than they might think, and Microsoft is trying to figure out how to make the process less painful.

BACKGROUND: Targeting cloud, Microsoft set to revamp major enterprise software platforms

RELATED: New services bolster Microsoft Azure as key enterprise cloud management system

COSTS: Microsoft overhauls pricing for Azure Web services

Anderson sat down with Network World Senior Editor Tim Greene to discuss these issues and others. Here is the transcript:

What's Microsoft's broad pitch in favor of hybrid clouds?

I think hybrid brings the ability to take advantage of not just raw cloud capacity but data and other types of things that exist in different places, in different clouds around the world. So in terms of just raw infrastructure as a service, it allows you to move a virtual machine or application to a particular service provider or to Azure to take advantage of either the economics or just to take a load off your own data center. But in the context of data and being able to get inside that data, as you think about hybrid you can now combine data from multiple places, multiple data sets, and really get some interesting insights out of that. Those are the two big ways to think about hybrid computing.

What is it about this mass of updates and upgrades announced here that makes life easier for the corporate IT guy?

The first thing I would say is the hybrid services that we've built – things like backup, disaster recovery, high availability, the ability to take advantage of a service provider or of Windows Azure to provide you those kinds of assets. Delivering those capabilities with only a couple of clicks of a mouse is something that no one in the industry has done to date.

The second thing that I would point to is every IT organization is being asked by its users to give them more data in a more real-time kind of feed so they can gain insights from it. The things we demonstrated in terms of rich visualization of your data really allow IT, but more importantly the business leaders – everyone from the CXOs down to the individual who's leading a marketing campaign – to get insights out of all the data being generated by these applications and Web sites. That simplicity, that ease of understanding what is happening so you can differentiate your business or better serve your customers, is world-class, and Microsoft is unique in doing that.

Can you pick out two or three specifics that illustrate your point?

Let's talk about Recovery Manager. This is literally the ability to go up and say I'm going to use the Hyper-V Replica capabilities to replicate all my virtual machines from my data center – let's use Azure – into the Azure data centers. So now I have a full replica, full fault tolerance, and I can do either an unplanned or a planned recovery, which gives me that confidence, gives me that net below me that the services I'm offering to the corporation or to my customers are always going to be available. Again, that is literally a handful of mouse clicks and it's very, very easy to do. I think that's a big value that shows off those capabilities.

Let's talk for a minute about the consumerization of IT. Every single organization I meet with wants to understand what we can do to help them embrace these consumerization and bring-your-own trends. Most of them are using System Center Configuration Manager to do their PC management, and what we talk about is cloud-optimizing System Center Configuration Manager with Windows Intune, which now delivers a cloud-based mobile-device-management solution that integrates with your Configuration Manager console. So in terms of simplicity, it's ease of use to get significant value: you can just use the tools you're using right now and enable your users across their PCs, their Windows devices, their Apple devices and their Android devices.

The third one I would focus on is the innovation we've done in storage: through the software in Windows Server and in System Center, we are delivering highly available scale-out storage all on industry-standard, cost-effective hardware. We demonstrated tiered storage and deduplication capabilities that in the past really required purchasing expensive hardware with lots of setup and lots of configuration. I think we set those up today with maybe 10 or 12 mouse clicks, and that delivers all of that value at an economic equation that's unheard of, because it's all just based on cost-effective hardware.

[Microsoft: Hybrid cloud is good for IT, end users and corporate bottom line]

Let's look at this from another perspective. What will be better for end users if their employers buy into these new broad capabilities?

Love it. First of all, let's talk about enabling users to use their device of choice and work from anywhere in the world. With the combination we've done of System Center and Windows Intune, I as a user can now work on a PC, on a Surface or Windows tablet, on an Apple tablet or an Android tablet, and I will have a consistent experience across all of those.

The second thing really centers around the simplicity and the power of the self-service [business intelligence (BI)] toolset that is part of SQL, exposed through Excel. It's the ability to take unlimited amounts of data, diverse sets of data, bring that all together and then bring rich visualization to it that allows me to wallow in it. I can experiment, I can ask questions, and I can literally sit there in a very visual experience, form hypotheses and theories, and learn about what is happening in my infrastructure if I'm IT – or, if I'm operating a business, what's happening in that business and how I can differentiate and improve.

With the self-service, maybe you can talk a little bit about Excel and how exactly that fits in with what you're saying.

Excel is the tool that most individuals around the world use when they're looking at numbers and financials and those types of pieces. What we've done is add a couple of add-ins to Excel – one's called GeoFlow, one's called PowerPivot – and this is all in-memory capability that allows you first and foremost to take incredible amounts of data – we demonstrated, for example, a billion rows of data inside of Excel – and be able in sub-second time to experiment and wallow in that data to get insights. The other thing we've done is attach that to capabilities coming down from Bing, and that's what gives us the ability to look at data in the context of location. We demonstrated the ability to look at the individuals attending TechEd – where they're from, what their titles are – seen in a very visual way modeled on top of a map of the Earth, and then be able to experiment or drill down into it: OK, let's take a look at the people coming here from Sweden or the people coming from Miami. It's that very visual capability married with data coming off the Internet, for example data coming from Twitter. It helps me understand what is happening and how I can customize the sessions here to the attendees.
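The location drill-down Anderson describes is, at its core, grouping records by geography. A minimal sketch of that kind of aggregation in standard-library Python, using a made-up attendee list (all field names and values here are illustrative, not from any real TechEd dataset):

```python
from collections import Counter

# Hypothetical attendee records; fields are illustrative only.
attendees = [
    {"name": "A. Lind", "city": "Stockholm", "country": "Sweden", "title": "IT pro"},
    {"name": "B. Ruiz", "city": "Miami", "country": "USA", "title": "Developer"},
    {"name": "C. Berg", "city": "Stockholm", "country": "Sweden", "title": "Architect"},
]

# First view: attendees rolled up by country.
by_country = Counter(a["country"] for a in attendees)

# Drill down the way the demo did: just the people coming from Sweden,
# rolled up by city.
by_city = Counter(a["city"] for a in attendees if a["country"] == "Sweden")

print(by_country)  # Counter({'Sweden': 2, 'USA': 1})
print(by_city)     # Counter({'Stockholm': 2})
```

The Excel tooling does this interactively over a billion rows with an in-memory column store; the grouping logic is the same idea at a vastly larger scale.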

Let's talk about the announcements and what they mean for saving money.

This is a place that Microsoft understands deeply, and it all starts with the learning we have inside of Azure and the 200 cloud-scale services that we operate. I mentioned that we literally operate hundreds of thousands of servers and we deploy hundreds of thousands of servers every year. So for us there's a relentless focus on decreasing complexity and decreasing costs by taking advantage of industry-standard hardware – a lot of innovation that we're doing in the public cloud and then bringing on premises. Everything from software-defined networking, to the innovations in storage where I get all the benefits that traditionally have come only from a SAN but on industry-standard, cost-effective hardware, to the ability to unify my environment from user enablement to endpoint protection – where I can manage my PCs, all my users' devices, as well as my anti-malware on one common infrastructure – all these things drive savings. All these things also drive agility, because they're integrated and you don't have to do the work to integrate them. It's done for you.

Other areas of savings?

I would point out some of the capabilities with respect to the self-service BI. It's just Excel, it's just SQL – it's not additional licenses, it's not additional hardware, and you don't have to rewrite your application. So you get all the benefits of that beautiful, rich self-service experience just with SQL and Excel. While not having to purchase additional licenses provides incredible cost savings, I think the real value is that it gives you an opportunity to really advance your business quickly.

So the agility comes at an incredibly low price; it's just incredibly attractive.

One of the things we talked about in what we call Storage Spaces is the introduction of tiering. We demonstrated a scenario where we saw a 16x increase in the number of IOPS – input/output operations per second, really a measurement of the performance or capacity of your hardware – by using tiered storage where 20 disks and four SSDs deliver the equivalent IOPS of about 360 traditional direct-attached storage devices. That is all built natively into Windows in the tiered capabilities of Storage Spaces.

The other thing is deduplication. We gave that demonstration specifically in a VDI situation, and we all know that in a VDI environment the No. 1 cost is storage, because you have every single one of those VMs replicating. With the deduplication demo that we did, we literally showed a 94% decrease in the storage footprint. Again, that's taking a significant whack out of the largest component of a VDI deployment. And just like we talked about, not only did that give me a significant cost savings, but because the shared blocks were cached, the VDI setup actually booted much quicker in a deduplicated environment than in a standard environment. So savings and performance; it doesn't get much better than that.
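The two figures Anderson cites reduce to simple arithmetic. A back-of-envelope sketch in Python, using only the numbers from the interview (the pre-dedup footprint is a made-up illustration, not a quoted figure):

```python
# Tiering demo: 20 HDDs plus 4 SSDs delivered the IOPS of roughly 360
# direct-attached disks. In spindle-count terms that's the ratio below;
# the interview's "16x" figure compares measured IOPS, not spindle counts.
hdd_spindles = 20
equivalent_das_spindles = 360
spindle_gain = equivalent_das_spindles / hdd_spindles  # 18.0

# Deduplication demo: a 94% decrease in VDI storage footprint.
pre_dedup_tb = 10.0                      # hypothetical pre-dedup footprint
post_dedup_tb = pre_dedup_tb * (1 - 0.94)

print(f"spindle-equivalent gain: {spindle_gain:.0f}x")
print(f"VDI footprint: {pre_dedup_tb:.1f} TB -> {post_dedup_tb:.2f} TB")
```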

You make a lot out of "cloud-first engineering" and taking innovations from Azure and placing them in on-premises products. Explain why that is a good thing.

The number of times in my career that we have built something on premises and the needs of the customer went beyond the scale limits we put on the product – I could count that in dozens. When you think about cloud-first, one of the things that principle does for us is make us plan for the kind of scale you would need if you were delivering a cloud-based service being used by hundreds of millions of individual devices. So plan for that scale, but then deliver it on premises, and then our customers are not going to run into any kind of scale limits.

Another thing it allows us to do is really understand how to take advantage of these cost-effective, industry-standard hardware components. We do a lot of innovation down in the actual fabric of the service. Again, as you think about scaling to hundreds of millions of users, to do that and be cost effective and cost competitive you want to run on the lowest-cost hardware you can, and that's an innovation we've been able to drive with what we've learned in the cloud through our cloud-first principle and then bring on premises.

The third thing I would say is another core part of that cloud-first principle: develop the software, try it out, prove it out, battle-harden it in the cloud, then bring it on premises. The great example of that is what we've done with the Windows Azure Pack. With the Windows Azure Pack we are literally bringing capabilities like high-density Web hosting, the portal framework that you use when you come up to Azure, as well as Service Bus – think of this as our messaging system to keep your applications and services running across clouds. All of those were proven first in Azure, battle-hardened and scaled; now we bring them on top of Windows Server and System Center through the Windows Azure Pack.

Is there any difference between the Azure Pack and Windows Azure Services for Windows Server, a bundle of features Microsoft announced late last year?

It's rebranded.

But it's the same?

Correct. It's an evolution. Service Bus wasn't in the Windows Azure Services for Windows Server. So this is the evolution of that with a name that's easier to remember and easier to say.

Microsoft is pushing hybrid clouds. Can you envision a customer for whom using public cloud is not the best way to go?

I think there are lots of examples of that. If you're in a highly regulated industry, or if you've got data sovereignty issues that you have to comply with – for example, if you're a government – those are certainly organizations where people will have private clouds for as far as I can see into the future. The most common scenario I see right now is most organizations asking how they can benefit from the public cloud and what the best way is to start. It could be that they take some of their non-mission-critical applications into the cloud, or it could be like what EasyJet talked about. [It placed a seat-assignment add-on to its reservation application in the cloud, but it appeared on the customer's Web reservation page as if it were all a single application.] Their mission-critical application and that data stay in the data center, but then they use the public cloud for things like their Web server, or for extending what their on-premises capabilities can be in a much simpler way, giving users a richer, more satisfying experience, all leveraging Azure. That's another example of a hybrid environment.

When you go with cloud-first, you end up with more updates. Why shouldn't that be a looming nightmare for IT?

In this cloud world the pace is just quicker, so all of us, from a competitive standpoint, need to figure out how to up our game and get more value out to our customers to advance the business at a faster pace than ever before. So whether it's Microsoft, whether it's a retail organization or a pharmaceutical company – all of us have pressure to be more agile and to be quicker. That's just kind of driving it all.

The second thing is, as we quicken our pace, one of the things to point out is that we actually have been releasing a lot more frequently than people realize. In the last 10 years we've done six releases of Windows Server: Windows Server 2003 and R2, 2008 and R2, 2012 and R2. So we actually have done six iterations of the operating system in a 10-year period, which I don't think most people recognize. And if you look at Virtual Machine Manager, since we released it in 2007 we've had an annual update release every year. So those things are driving this, and those things are kind of the reality.

We are also doing work here at Microsoft to make it easier for customers to embrace these updates and to deploy them. I think what we demonstrated in terms of live migration between Windows Server 2012 and R2 shows how you can migrate or upgrade from one version of Windows to another with zero downtime, because it's a live migration. I think those kinds of scenarios are things you will see more and more from us.

We definitely take on a responsibility from an engineering perspective to do all that we can to enable a zero downtime upgrade or migration from one version of Windows to the next.

You give the one example of zero downtime. Is that coming across the board to all the platforms that were talked about?

No. We were able to do that from an infrastructure standpoint as you think about Windows as a guest in a virtualized environment. Now the application that is running inside of that guest is different. Those applications will have their own upgrade processes and cadences and those kinds of things. But in terms of just the raw operating system we're going to do a lot from the engineering perspective to make sure that we simplify and deliver great compatibility so that organizations won't feel like they have to test for two months or six months before they deploy a new operating system from Microsoft. They can embrace these updates and get them deployed quickly.

Initially you pointed out that iterations are coming faster, and that's a good thing. Then you took pains just now to point out that you've already been doing it pretty fast anyway. So which is better?

We are a little bit faster than we have been. The point I make on that is I think many organizations believe we're on a three-year release cadence with Windows. And we actually have been much faster than a three-year cadence.

What do you offer to corporate application developers?

A lot of what we talked about this morning with Visual Studio 2013 really centered on improving the application development life cycle. There's a lot of conversation in the industry about ALM, application life-cycle management, and what that really comes down to is how fast a development organization can build, deploy, learn; build, deploy, learn – it becomes this circle, this continuous-improvement process. A lot of what we announced was around improving collaboration: improving the ability to see within the code the change history, who made the changes, and what impact a change is going to have on the rest of the system, and really letting an individual developer understand the work that he or she is doing in the context of the broader project, with the ability to do real-time collaboration with his or her peers.

They can take advantage of Azure to do load testing. Before, I talked about the need to make sure, from our cloud-first principles, that we can scale to the largest number of users possible. Well, now with what we've built into Visual Studio you can actually take advantage of Azure capabilities to put a significant load on the application you're developing. Independent of whether that application is going to run in Azure or in your own data centers, you can use Azure to load test and make sure it scales to what the need is. It's a feature of Visual Studio that ties into Azure – think about it as an attached service to Visual Studio.
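In spirit, this kind of load test is just many concurrent requests with latency measured per request. A minimal, self-contained Python sketch of that idea, with a throwaway local HTTP server standing in for the application under test (the real Visual Studio feature generates load from Azure at far larger scale; nothing here uses Microsoft's actual API):

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    """Stand-in for the application under test: always answers 200 OK."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):  # silence per-request console logging
        pass

# Start the stand-in server on an ephemeral port, in a background thread.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

def hit(_):
    """Issue one request and return its wall-clock latency in seconds."""
    start = time.perf_counter()
    with urlopen(url, timeout=5) as resp:
        resp.read()
    return time.perf_counter() - start

# Generate load: 40 requests across 8 concurrent workers.
with ThreadPoolExecutor(max_workers=8) as pool:
    latencies = list(pool.map(hit, range(40)))

server.shutdown()
print(f"{len(latencies)} requests, avg latency "
      f"{sum(latencies) / len(latencies) * 1000:.2f} ms")
```

A real cloud load test adds ramp-up schedules, geographic distribution of load agents, and percentile reporting, but the measure-under-concurrency loop is the same shape.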

Let's talk a little bit about what we've done in terms of helping development interact better with IT and operations – again, all within the context of application life-cycle management. Probably the best way to describe this is in terms of a scenario. Say a server gets an exception or has some kind of a problem. Instead of IT calling the developer and saying, hey, I've got an application that is slow or has a fault, they can now, with a couple of mouse clicks, take a VHD of that particular running server and have it automatically routed to the developer. In Visual Studio the developer would see the incident come in and would be able to hydrate that VM (Visual Studio actually ships with Virtual Machine Manager from System Center), debug it, test it, and make whatever fixes need to be made, and that could then be redeployed back out to IT. That really is all about decreasing the amount of time it takes to resolve an issue and improving the collaboration and life-cycle management across development and operations.

Tim Greene covers Microsoft and unified communications for Network World and writes the Mostly Microsoft blog. Reach him at tgreene@nww.com and follow him on Twitter @Tim_Greene.
