Cloud computing has evolved beyond basic SaaS, IaaS, and PaaS offerings, as the cloud matures to become the engine of enterprise technology innovation
Stories by Eric Knorr
In August, HP announced it would spend US$10 billion to acquire Autonomy, Britain's largest software company — a leader in enterprise search, compliance, and cloud archiving with an impressive list of Fortune 50 customers. A classic Silicon Valley soap opera ensued, starring none other than Larry Ellison, who in a September earnings call accused Autonomy founder and CEO Mike Lynch of shopping Autonomy to Oracle prior to agreeing to the acquisition by HP. Lynch has denied the charge.
Earlier this week, HP announced the deal was formalised. So what can we expect from the union of HP and Autonomy?
In my interview with Lynch, we covered a range of solutions that will likely flow from Autonomy, which he says will remain a "fairly independent" business division of HP. Here's a quick summary of what Autonomy brings to the party:
A very big public cloud. According to Lynch, Autonomy has a huge hosted e-discovery and archiving service: "A lot of commentators have missed that Autonomy's cloud business is now very large. It's now about 30 petabytes. And that's heterogeneous data — it's desktops, it's messages — so what you've got is a great resource if you've got questions about what's going on in a company."
Visual recognition. Last spring, Autonomy debuted its Aurasma software for smartphones, which — using a smartphone's camera — recognizes images and objects in the real world and enables you to "interact" with them. Lynch explained how that could extend to HP's printing business. "We've got cloud-based document management, which comes as part of the printer. The Aurasma technology will be linked to the printer, so basically, whenever you print something, that image can then be recognized and linked to its virtual version. And whenever you print something out, that print version can become interactive — you just hold a smartphone up to it."
Search and discovery appliances. Appliances using HP hardware were top of mind for Lynch. He believes that plug-and-go search and discovery solutions, coupled with HP's huge sales force, will expand the market for Autonomy's technology — which has traditionally targeted large corporations — to vast numbers of small and medium businesses. "We have 200 salespeople," says Lynch. "Now, with the whole of HP's channel, it's 28,000 people."
Integration with Vertica. At Leo Apotheker's coming-out party a year ago, Vertica, a columnar database solution, was the only technology demo on display. According to Lynch, "That is a really clever piece of technology. So basically what we're going to be doing is putting Vertica with Autonomy, and in one lot, you'll be able to do not only SQL queries, but structured/unstructured at the same time. Unstructured information growth is so high now, and it's becoming such a core part of what we have to do within the enterprise, that it's time for the database to be eclipsed by something that can handle both rather than just one type of information."
Vertical software. Lynch said that the ability of Autonomy software to manage unstructured data will be a boon in a number of vertical areas. "You've got things like HP Healthcare. And what we can do now is drop in some of our really neat unstructured information health care technology — things like the Auminence Diagnostics tools. That's really nice stuff that could give some differentiation to those products."
As a technologist and entrepreneur, Lynch is an evangelist for his own special brand of unstructured data management. His core technology is based on Bayesian probability, an 18th-century theory that provides the foundation for what he terms "meaning-based computing," where systems "understand" unstructured information in part by identifying clusters of conceptually similar information.
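The clustering idea behind "meaning-based computing" can be illustrated with a toy sketch. This is an assumption-laden illustration, not Autonomy's actual IDOL engine: documents that place probability mass on the same terms are treated as conceptually related, and the most similar pair clusters together.

```python
from collections import Counter

def term_probs(doc):
    """Maximum-likelihood term probabilities for one document."""
    words = doc.lower().split()
    counts = Counter(words)
    total = len(words)
    return {w: c / total for w, c in counts.items()}

def similarity(doc_a, doc_b):
    """Bayesian-flavoured similarity: the average probability mass
    the two documents assign to their shared vocabulary."""
    pa, pb = term_probs(doc_a), term_probs(doc_b)
    shared = set(pa) & set(pb)
    return sum(pa[w] + pb[w] for w in shared) / 2

# Hypothetical documents for illustration.
docs = [
    "merger acquisition deal announced by the board",
    "the board approved the acquisition deal",
    "printer firmware update improves image recognition",
]

# Score every pair; the two deal-related documents cluster together.
scores = [(i, j, similarity(docs[i], docs[j]))
          for i in range(len(docs)) for j in range(i + 1, len(docs))]
best = max(scores, key=lambda t: t[2])
print(best[:2])  # → (0, 1): the most conceptually similar pair
```

Real systems weight terms by corpus-wide rarity and work at far larger scale, but the principle is the same: shared conceptual vocabulary, scored probabilistically, stands in for shared meaning.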
With HP offering an unprecedented opportunity to scale out, Lynch has no shortage of ambition. "What I've got is the wonderful ability to open a new chapter," he says. "This time it's about turning over the basis of the enterprise software industry in the biggest move in its history. Every other change in its history has been about the technology — the 'T' in IT. It's been about mainframe or client-server or even cloud. What we're actually talking about here is information changing — it's the 'I' — and that's a phenomenal change. And to be able to do that with [a company] the scale of HP involved is really exciting."
The spectre raised by Nicholas Carr in 2003 – that IT doesn't matter – has risen again, summoned by the two prevailing trends of the day: cloud computing and the consumerisation of IT.
IT managers and CIOs today would do well to read the original Harvard Business Review essay, in which Carr argues that IT is becoming a commodity the same way rail transportation or electric power did. The essay has well-known flaws, the worst of which is Carr's narrow characterisation of IT as network, compute, and storage infrastructure. But in at least one respect Carr was prescient: The commoditisation of those core infrastructure functions is now taking place.
For an increasing number of workloads, it matters less and less whether you spin up VMs in Amazon's datacentre or in your own – or even whether you license applications on premises or rent them from a SaaS provider. Today's key questions are "How fast can I get it?" and "What's the TCO?"
At the same time, CIOs and IT managers are under assault from a commoditising force Carr never anticipated: Consumer devices that users bring to work. IT has been forced to accommodate mobile devices tied to commercial networks because smartphones and tablets deliver huge gains in productivity.
Those who try to erect a Maginot line against commoditisation, and insist that all IT from infrastructure to mobile devices must stay under their complete control, hobble their business's competitiveness and limit their careers. At the same time, no company would tolerate the chaos of lines of business buying and deploying their own technologies without regard to security, integration, or economies of scale.
Finding a middle ground between those extremes is part, but not all, of becoming a modern CIO. We are entering a period of accelerated change, one that includes the break-up of the Windows desktop paradigm. Here is my advice to CIOs, IT managers, CTOs, and other technology leaders:
Become a technology strategist. The era of the CIO who simply "keeps the joint running" is over. Just as good business strategists need to think beyond the next quarter and explore new opportunities, IT leaders need to look for emerging technologies that accelerate innovation, from promising cloud applications to internal app stores to advanced virtualisation management. Standing still isn't a safe place to be anymore.
Build a service catalogue. Gone are the days when you can simply serve the business stakeholders who bark loudest with one-off, end-to-end infrastructure and apps to meet their needs. Technology leaders need to step up and say: "You want to drive the cost out of operations? Then give me the resources up front to provision shared services and the authority to make every appropriate department use them so I get maximum economies of scale". Embrace commoditisation when you can and you'll free up resources.
Cultivate your developers. When infrastructure becomes commoditised, developers are the big winners. Development, test and deployment cycles shorten dramatically, leaving more time for developers to interact with the business, engage in agile practices, and create applications that accelerate business processes. Coming out of a disastrous recession, the number one imperative is to jump on new business opportunities. Create a development culture where you can deliver apps to meet that challenge with all appropriate speed.
Practice postmodern security. Networks are permeable. In fact, most are already infected. The perimeter still needs to be protected, of course, but concentrate your efforts on authentication, access control, encryption and other security technologies that protect data and applications.
Empower your users. In most businesses, the most valuable employees are often the ones who have the initiative to provision their own technology. If they're not going to wait for IT to build what they want and go to the cloud instead, don't clamp down; help them find the right providers and create a framework for provisioning instead. Rather than ban mobile devices, create policies that enable people to use them safely – and explore new technologies like mobile client hypervisors.
The truth is that every part of IT matters – but a smooth-running, elastic infrastructure is the new baseline. To stay strategic, CIOs need to drive cost out of infrastructure and shift investment to technology and development that grows the business. And when IT makes users its ally, and shares control over technology, IT isn't diminished – it just broadens and deepens its integration with business.
Knorr is editor-in-chief at InfoWorld
Imagine being CIO for Intel. You serve over 90,000 employees scattered around the globe, many of them hardcore technologists happy to second-guess any decision you make.
Intel CIO Diane Bryant doesn't seem to be buckling under the pressure. In a recent interview with InfoWorld's Doug Dineley and me, Bryant — who joined Intel in 1985 and worked her way up through the ranks — clearly laid out her vision for making desktop virtualization a key part of Intel's long-term plans to serve its internal users.
Dineley: What caused you to consider virtualisation on the client side?
Bryant: The big change I've seen in even just the two years since I've been CIO is the plethora of devices. Intel calls it the "compute continuum." It used to be you would come to Intel and get a desktop, and in '97 you got a notebook, and then smartphones. Now there are all kinds of devices that people are looking to bring into the environment: netbooks and tablets and all kinds of things. In January we opened it up and we said: "If you have a smartphone, and you're willing to sign a waiver that you're going to have Intel-confidential information on your personal device, we will push contact, calendar, and email onto your smartphone."
Knorr: Can I ask you what the waiver says?
Bryant: I'm not a lawyer, so I'll paraphrase. In general it says: "If you lose your iPhone you have to immediately call Intel and we will wipe the phone, which means we will wipe the Intel information and we will wipe your information."
Knorr: That capability has to be enabled before somebody can use it?
Bryant: Yes. We put a password on the device and we have remote wipe capability. This is where client virtualization comes in...
Dineley: So you're an Exchange shop and you're using ActiveSync?
Bryant: Yes, so it's very low cost for us. And to your point, how many iPhones: In January when we launched we had 8,000 BlackBerrys; we now have 9,000 employee-owned devices in the environment. The vast majority are iPhones. And we all know how often we use our handheld devices to stay connected — those hot emails that you need to reply to. Now we have that many more employees who are that much more productive. The survey feedback says that they save 30 minutes a day because they have that information on their devices. When they walk down the hallway trying to find a conference room, they don't have to open their notebook, they just look [at their handheld device]. It was a huge productivity gain, but from a client virtualization perspective, the employee had to sign up and say: "You can wipe my device." Where we want to get to is a secure, virtual partition on your smartphone device [so] when you lose the device I can wipe my VM and your personal data remains intact.
Knorr: You can't do that with an iPhone.
Bryant: You can't do that today on anything. That is what we're actively working on today. That's one example of why client virtualisation is so key and why we're enabling it not just for your desktop or your notebook, but to be able to support secure partitions across a full range of devices that you may have or that I may want to buy for you.
Dineley: So you're a big believer in the client hypervisor?
Bryant: I am. I'm a big fan of it. But also I think what's more important is that virtualisation technology is a foundational technology that enables many different use models.
Dineley: So not all Intel employees are going to have the same thin client?
Bryant: No, because all Intel employees are not the same. That's the same in any large corporation. In the old days you ignored that fact and you gave everybody the same device. Today you don't have to do that anymore. You can say: "You're a factory worker, here's the best device for you to be productive. You're a sales guy, you're on the road, you're always mobile, here's the best device for you. You're an engineer cranking massive computations, here's the best device for you." We've definitely gone to a segmented population, giving the best device to the employee based on their needs or multiple devices based on their needs.
In most cases we want the VM on the device, because you're not always connected. If the virtual machine is off in the cloud, there's an assumption that you're connected in order to be productive. That's just not a reality, so in general we want a rich client with the VM on the device.
There are cases where that isn't the best solution, though. For instance, our training rooms. We have large training rooms around Intel worldwide — [with all] those desktop machines, how nice it would be if you didn't have to send IT guys out to maintain and update those machines. You just hold a virtual container out in the cloud, a virtual hosted desktop. It's a static solution, it's not used very frequently, but when it's used you want it to work. You don't want the employees coming in to be trained and the silly desktop doesn't run. So that's a great example of hosting out in the cloud. We have a proof of concept going on that demonstrates the lower total cost of ownership for IT.
Knorr: So client hypervisor is the model you're really going for. You've decided that VDI with a constant connection is not practical, except in these training-room type environments.
Bryant: Or for some of the factory workers. I think the bigger point is, it's such a heterogeneous environment, with very different needs, across the population and across the devices — not just by employee but by application. For instance the sales force, which is incredibly mobile: We're rolling out a new CRM solution for them. One would have thought that we could do something that is SaaS or web-based, that they're going to be connected. [But] the feedback from them is: "No, you can't assume I'm always connected." They want it on a small form factor, mobile device, but they want it local. So this is our exploration process: "Who are you? What app do you need? And when and how do you need it?" Then I have to figure out what's the right secure, virtualized solution to deploy that app to you.
Dineley: Do you see the desktop management problem as primarily managing virtual machines on client devices or primarily managing virtual machines on servers that are accessed by client devices? And in some cases those virtual machines travel to client devices and back. What's the central hub for managing all of these VMs? Is it a VDI server farm or is it some other kind of solution?
Bryant: For the majority of the cases, where almost every employee has a notebook device — we have 90,000 notebooks — in general for these notebook devices it's going to be local to the device. A VM loaded on the device. But you can also look at some handheld devices that don't have the capacity for that and they're going to be hosted in the cloud. You have to look at what is the device, what is the app, what is the use model...
Dineley: Are most users mobile?
Bryant: We need to assume that most users are going to pick up their work and leave and that they're not always connected. That's a kind of a baseline assumption. Back in 1997 we went from desktop computers to notebook computers to do exactly that — to allow you to be mobile, to allow you to work from your kid's soccer field if that's what you need to get your job done. The assumption is that people always want to be able to pick up their work and leave with it and they aren't always going to be connected.
Dineley: This is a difficult problem, isn't it?
Bryant: [Laughs] I'm not trying to be cagey, it really is complex.
Knorr: We keep interrupting you and making you stray from your narrative, but basically it sounds as if you're saying your practical implementation of desktop virtualisation is awaiting a robust client-side hypervisor.
Knorr: And that desktop virtualisation in the current deployment is not really widescale at all.
Bryant: No, it's just beta.
Knorr: So client-side virtualisation is the gating factor for desktop virtualization for you. Is that fair to say?
Bryant: Yes. We're in test [mode]. We're in pilot. We have 20,000 contractors at Intel at any one point in time. Today — except for this pilot program we have in India — we give them an Intel notebook with the Intel load on it. It's very expensive, because that contractor already has a notebook his company gave him. So we said, hey, let's do a pilot in India and say from now on, when we hire you as a contract worker, bring your company-owned notebook in and we will take a USB and load a virtual machine onto your notebook with the Intel load. It's secure, it's partitioned from your corporate load, and then when the contract is over we delete it. We now have a couple hundred contract workers in India — we have a large Indian design centre — [who are] working on this and it's working pretty well.
Knorr: And this is based on a client hypervisor or...?
Bryant: It's on a client virtual machine. We try not to talk about suppliers.
Dineley: Is it a bare metal hypervisor or Type 2 virtualization?
Bryant: [Laughs] That might narrow it down a bit, wouldn't it? But it's in beta, it's working, and it will save us $1,200 per notebook. The savings to Intel for the same capability to that employee are tremendous. The other [case] is mergers and acquisitions. Say we acquire a company and ideally day one when they show up you want them to be productive. They have their old company notebook with their old company build. We plug in a USB, drop down a VM, and then — we call it "day one up-and-running" — we have it running in an Intel environment.
We have demonstrated successes that tell us [the advantages of] client virtualisation. If you have a device that you love, and for me to take on that burden of giving you the device you love... just bring your device to us and I'll drop a VM onto it. You can have the Intel load in our VM running on our OS and I can trust it because it's secured from your personal information. I no longer have to back up and save your personal family photos — which I have to do today, because I know you put your personal photos on your Intel notebook and I have to back it up. So it's secure, I only worry about my VM, you worry about your personal stuff, you get the device you want, Intel remains productive...that's the direction.
Knorr: There's an argument to be made that you're paying twice. Shouldn't everyone just have thin clients? Otherwise, you're paying for a powerful server to host desktop virtualisation on top of desktop or notebook computers.
Bryant: It all comes back to your use model. If your employee is tethered to the desk in a closed environment and they have access to X applications, then host those applications in the cloud, in your datacentre. That makes sense. But you put a box around what that employee can do and where they can do it.
Knorr: Did you look at the other solutions evolving now: Google Docs, the ability to work offline with HTML5's local storage model, that kind of thing?
Bryant: The environment is evolving very rapidly, because that model — those office applications hosted in a cloud — used to work only when connected. Now they have offline features and capabilities, so you've opened up a new opportunity for delivering applications to devices. We do proof of concept on cloud-based applications all the time.
The issue with that is, as a large enterprise I have a very large infrastructure — I have 100,000 servers in production — and so I am a cloud. I have the economies of scale, I have the virtualization, I have the agility. For me to go outside and pay for a cloud-based service...I can't make the total cost of ownership work. And most of my peers can't either. That's why most large enterprises are focused on building cloud capability — agility and scale — inside, with their own infrastructure.
Knorr: What about client virtualisation for mobile devices? You may have more insight than we do about how this is coming along, because currently, it doesn't really exist.
Bryant: You mean smartphone, handheld-size devices? There are various startups that are building handheld-based virtual solutions and we are absolutely out looking, helping, testing. Because that will be the key. Today I can only put email and calendar [on the device] — and I strip the attachments because that device isn't secure as an enterprise device. I've made you happier, because I'm letting you use your personal device, but I've limited what I can give you because of security. I have to protect those assets at all costs. As I'm able to put a VM on that device, and I can secure that device, then I'm able to give you greater and greater access to Intel's data and apps.
Google's valuation now stands at US$124 billion. How big is that? For reference, IBM is worth $173 billion. Once the big, friendly St. Bernard of tech companies, Google has turned into Godzilla overnight.
Up until now Google hasn't seemed terribly serious about the business software market. Gmail has made some headway in business, but the paid version of Google Apps — which includes Docs, Gmail, Sites, Wave, and a three-nines SLA for $50 per seat per year — can't be too successful, because Google still won't say how many customers have gone for it.
So assuming Google really does want to get serious, why not buy Salesforce.com? When it comes to successful SaaS (software as a service) applications, there's Salesforce.com and everyone else. One big reason is that, unlike most web-centric companies, Salesforce really knows how to sell applications to business, something Google is just beginning to learn. Plus, thanks to an alliance struck in 2008, Google Apps is already integrated with Salesforce.
Google recently provided yet another indication of the two companies' converging interests. Just as VMware and Salesforce struck an alliance last month to enable Java applications to run on the Force.com cloud development platform, VMware and Google announced a similar arrangement for the Google App Engine platform. (Java apps already run on Google App Engine, but the VMware deal makes it easier to migrate them.)
Ultimately, larger businesses aren't going to make any major moves to the cloud unless they have a platform for developers; after all, custom-built apps always seem to make up half of enterprise applications. Now, Salesforce's Force.com is a great platform, but it doesn't appear to have much traction outside of Salesforce's existing CRM customers. Google App Engine is just getting off the ground, but it has the company's enormous resources behind it. Imagine if the two joined forces.
With Google Docs, Salesforce, dev in the cloud, and Google's monster infrastructure, you have the makings of a powerhouse cloud ecosystem. Let me be clear: I have absolutely no inside information. Salesforce is valued at $9.8 billion, which is quite a meal even for Godzilla — and more than three times the $3.1 billion Google paid for its biggest ticket item so far, DoubleClick. And I have no idea how Salesforce's vociferous CEO Marc Benioff would react to such a proposition.
But if Google wants a piece of the action in this area, what other choices does it have? Microsoft is fighting back against Google Docs with Office Web Apps. And Google has no play at all in the enterprise application market, which according to the research firm AMR hovers somewhere north of $60 billion in size. Salesforce claims that it's "the CRM choice of 72,500 companies." If Google wants more than a token number of paying business customers in the near term, it's going to have to buy them.
Desktop virtualisation harks back to the good old mainframe days of centralised computing while upholding the fine desktop tradition of user empowerment. Each user retains his or her own instance of desktop operating system and applications, but that stack runs in a virtual machine on a server — which users can access through a low-cost thin client similar to an old-fashioned terminal.
Not to put too fine a point on it, but as decades go, the 2000s sucked. It's hard to imagine a worse beginning than the dot-com bust followed by September 11. And the Great Recession as a grand finale?
Yet somehow technology kept barreling along. In business, the shift from client-server to web, from proprietary and expensive to open and commoditised, was stunning in its swiftness. The impact on IT was a little more chaotic than we might have liked, but plummeting costs had the effect of driving technology into every corner of the enterprise. Predictions of IT's irrelevancy proved exactly wrong.
Looking back on the '00s, I found it pretty easy to pick the five technologies I thought had the greatest impact on business. Remember, these weren't invented during the decade, but all of them most certainly came into their own in the '00s.
1. Linux. If you were going to name the '00s after any single technology, you might as well call it the Linux decade. The first Linux kernel was released in 1991, but mainstream enterprise adoption of Linux was decidedly a '00s thing. Not only did Linux open up a whole new role for x86 hardware, it changed the economics and development model of the software business forever.
2. XML. First recommended to the W3C in 1998, XML didn't really get rolling until 2002 or so. Today XML is the universal standard for document and data exchange, enabling everything from enterprise application integration to RSS. Every major commercial DBMS now claims "native XML" capability. The degree to which different business systems can exchange data smoothly may be pathetic compared to what it should be in 2009, but XML gets much of the credit for the inroads we've made so far.
3. Server virtualisation. I have a vivid memory of Diane Greene, then CEO of VMware, sitting in InfoWorld's offices in 2004 explaining how virtualisation worked. It was an "oh wow" rather than an "aha" moment, although I can't pretend to have guessed the impact the technology would have. The idea of divvying up one server into many virtual machines seemed more like an academic exercise than a commercial boon, until I understood how desperately underutilised most servers were. We may never again witness anything like the pace at which server virtualisation has been adopted. Enterprise IT normally doesn't jump on anything that fast.
4. Rich Internet applications. A grab bag of technologies, including AJAX and Flash, enabled web apps to replace client-server applications across the enterprise. As long as programmers avoided browser-specific features, new application versions no longer needed client upgrades — which, among other things, allowed software as a service to bloom. The shift to web apps also democratised programming, fostering lightweight development using scripting languages.
5. Storage area networking. Pooled, block-addressable storage spread across multiple storage arrays connected via Fibre Channel was a novel idea at the outset of this decade. SANs offered fast access to big storage, improved reliability and availability, and awesome scalability. Separating enterprise data and putting it on its own reliable high-speed network also made server failures far less critical.
Sorry, did I leave out the iPhone? Well, it's not an enterprise technology — yet. But there are many viable alternatives to choose from in building your own list. Take blade servers, for one. Or VoIP. Or network attached storage. Or ... Windows XP?
If the '80s was the decade of the PC, and the '90s was the decade of the internet, then the one thing the '00s lacked was a big, single, defining technology. Though you can't call it a technological advance, I think of the '00s as the decade of data. Thanks in part to Enron, we compulsively saved petabytes of the stuff. And there it sits, while at the same time, we have tons of cheap surplus computing power — spreading from underutilised CPUs in the datacentre to server farms in the cloud. With luck, the 2010s will be the decade in which we finally figure out how to put the two together on an unprecedented scale.
Like a low tide after a storm, an economic plunge can expose things you never noticed before. When the economy was barrelling along, we neither knew nor cared about $1,400 trash cans or $87,000 rugs decorating certain executive offices. Now, after the implosion, such extravagances reveal that some people live their lives so far above the fray, they can't possibly know what's going on in the real world.
Likewise, the privations resulting from the collapse have exposed a yawning division in IT – between the CIOs who sign big contracts with big vendors and the IT people who live with the consequences. The gulf has never been wider. The high-level pitches spouted by vendors and swallowed by CIOs today seem to bear even less relevance to real-world IT than they normally do. Worse, with so many IT organisations hard pressed to keep the lights on, who can tolerate wasting resources on products and services that were bought as a result of some slick boardroom sell job?
Let's check out what some of the big vendors are saying.
IBM is in a class by itself with its stratospheric Smarter Planet campaign. You might think this has to do with green IT – and you would be right, but that is only part of it. IBM is talking about 21 different things, from healthcare to retail to education to collaboration to ... well, you name it. And oh yes, cloud computing, too. It's the broadest tech marketing campaign I have ever witnessed, and if I am hearing "smarter planet" mentioned with every new IBM product announcement, so are CIOs. What possible relevance does it have to IT getting things done?
Flying several thousand metres closer to the ground is HP's Adaptive Infrastructure campaign. The notion here is that, through technology that fosters flexibility (mainly blade servers and virtualisation as far as I can tell) as well as more effective monitoring and management of the network, you can transform your infrastructure and vastly increase quality of service while reducing costs. Cisco also talks about transforming the enterprise through collaboration and network virtualisation – with premium Cisco products, presumably.
Such transformational pitches have been around since the idea of "re-engineering" was first floated in the 1980s. Transformation requires huge effort and resources. Are we any closer? Are CIOs really buying it?
We are in this funny phase in IT, in which Gartner tells us that everyone's favourite trend, cloud computing, is at the peak of the hype cycle and on the cusp of the "trough of disillusionment". How can anyone be disenchanted with something that has never been properly defined?
If you ask me, this isn't the time for grandiose ideas that are going to change IT forever. Sure, people should use cloud services when they make better sense than adding servers and/or licensing software locally. And new technologies like virtualisation and automated server provisioning have had a major, positive impact on IT.
But transformation takes time. And in most organisations, now is not the time for highfalutin initiatives, when IT people must leverage diminished resources against more pressing matters, like how to handle the rampant growth of enterprise data or keep some irreplaceable legacy system up and running.
Management always wants IT to do more with less, and in this economy that is not going to ease up any time soon. In exchange for even more sacrifice, the guys in the plush offices should learn what's really happening on the ground before they push ahead.
Am I the only one to notice that the two big trends of the day, cloud computing and mobile tech, seem to have so little to do with the core issues that concern IT professionals?
While the guys at Gartner and Forrester dream of other things, at InfoWorld we've given a name to the most pervasive underlying trend in all of IT: the enterprise data explosion.
You've heard the basic IDC stat, which sounds like a malign inversion of Moore's Law: Data doubles every 18 months. And the explosion shows no sign of abating. New compliance regulations in the wake of the global financial meltdown will likely mandate even more data retention, while the imperative to digitise healthcare records in the United States will prompt a fresh set of storage requirements. With the cost of disk space at an all-time low and the vagaries of compliance laws compelling businesses to "save everything" as a brute force method to reduce risk, enterprises are adding capacity at an astounding rate.
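The doubling math compounds faster than intuition suggests. A quick back-of-the-envelope sketch (the 100 TB starting figure is illustrative, not from IDC):

```python
# Illustrative projection: data doubles every 18 months (the IDC rule of thumb).
def projected_storage(initial_tb: float, months: int, doubling_months: float = 18.0) -> float:
    """Return projected storage after `months`, given a fixed doubling period."""
    return initial_tb * 2 ** (months / doubling_months)

# A hypothetical 100 TB estate doubles in 18 months and is 8x in 4.5 years:
print(round(projected_storage(100, 18)))  # 200
print(round(projected_storage(100, 54)))  # 800
```

At that rate, capacity bought today covers only a fraction of the data a shop will hold by the next hardware refresh.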
IDC analysts predict that unstructured data will grow at twice the rate of conventional structured data held in databases. By 2010, this "dark matter", so named due to the challenge of extracting useful information from raw data, will make up the majority of all enterprise data stored.
Most of that dark matter comes in the form of security, network, and system event logs. Almost everything that happens in a business is recorded in a log file, making the search and analysis of that data an essential part of managing, securing, and auditing how a company's technology infrastructure is used. Logs are key to many forms of regulatory compliance (PCI, SOX, FISMA, HIPAA) and are a source of business intelligence just waiting to be tapped — think web servers and CRM systems.
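To make the idea concrete, here is a minimal sketch of the kind of rollup these log-analysis products perform at massive scale — the log lines and the simplified common-log format are hypothetical, invented for illustration:

```python
import re
from collections import Counter

# Hypothetical log lines in a simplified Apache common-log style.
LOG_LINES = [
    '10.0.0.1 - - [18/Mar/2009:10:00:01] "GET /index.html HTTP/1.1" 200',
    '10.0.0.2 - - [18/Mar/2009:10:00:02] "GET /missing HTTP/1.1" 404',
    '10.0.0.1 - - [18/Mar/2009:10:00:03] "POST /login HTTP/1.1" 500',
]

# Capture the client IP at the start and the 3-digit status code at the end.
LOG_PATTERN = re.compile(r'^(\S+) .*" (\d{3})$')

def count_errors(lines):
    """Tally 4xx/5xx responses per client IP -- a toy version of a SEM rollup."""
    errors = Counter()
    for line in lines:
        m = LOG_PATTERN.match(line)
        if m and m.group(2).startswith(('4', '5')):
            errors[m.group(1)] += 1
    return errors

print(dict(count_errors(LOG_LINES)))  # {'10.0.0.2': 1, '10.0.0.1': 1}
```

Real products do this continuously, across millions of events per hour, and correlate results across many sources — but the core operation is exactly this kind of parse-filter-aggregate pass.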
A number of tools now help IT search and analyse log files, including products from AlertLogic, ArcSight, LogLogic, LogRhythm, RSA Security, Sensage, and Splunk. ArcSight and RSA also sell leading SEM (security event management) systems, which collect event log data across network and security devices, correlating network events in real time to identify security threats as they happen. SEM solutions collect vast amounts of event data and provide reporting tools for mining it.
Dark matter is only about half of all enterprise data stored. The structured stuff is ballooning, too: transaction records, email archives, rich media, near-line database backups, and on and on. We all know how low-cost storage systems and virtualisation are making it more economical to store this stuff. But managing and securing these huge volumes of data are becoming prohibitively difficult, and the cost of buying and maintaining new hardware without increased efficiencies cannot be sustained forever.
We are still years away from solutions that allow administrators to wrap their arms around the whole, heterogeneous storage mess and manage it from one monster control panel. Meanwhile, some interesting new options for easing the pain are emerging.
Most people have heard of one of them, thanks to the recent bidding war over Data Domain: data deduplication. Here, byte- or block-level data reduction techniques shrink the disk requirements (by as much as 80 percent or more) for backups, snapshots and even virtual server disk files, lowering overall data protection costs while at the same time making more data available on near-line storage.
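Block-level deduplication is conceptually simple: split the data into chunks, hash each chunk, and store each unique chunk only once, keeping a "recipe" of hashes to reassemble the original. A toy sketch using fixed-size chunks (shipping products typically use variable-size, content-defined chunking, which dedupes far better when data shifts):

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunks, for simplicity

def deduplicate(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Split data into chunks; store each unique chunk once, keyed by its hash."""
    store, recipe = {}, []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # only the first copy of a chunk is kept
        recipe.append(digest)
    return store, recipe

def rebuild(store, recipe):
    """Reassemble the original data from the recipe of chunk hashes."""
    return b''.join(store[d] for d in recipe)

# Repetitive data (think nightly backups) dedupes well:
data = b'A' * 8192 + b'B' * 4096  # three chunks, only two unique
store, recipe = deduplicate(data)
assert rebuild(store, recipe) == data
print(len(recipe), 'chunks referenced,', len(store), 'stored')  # 3 chunks referenced, 2 stored
```

Backups are the sweet spot because successive runs are mostly identical, so nearly every chunk after the first run is already in the store.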
Some of the new cloud solutions are interesting, too, the most prevalent of which are cloud-based hosting or backup/recovery solutions from the likes of SunGard or Rackspace. In addition, many of the first practical cloud-based applications have been built to store, manage, and process massive data sets, leveraging large clusters of commodity hardware and using programming frameworks (such as MapReduce and Hadoop) for reliable and scalable distributed computing.
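The MapReduce idea itself is compact: a map step emits key/value pairs, a shuffle groups them by key, and a reduce step aggregates each group. Here is the pattern in miniature as a single-process word count — the real frameworks distribute these same steps across thousands of machines with fault tolerance, which this sketch makes no attempt at:

```python
from collections import defaultdict

def map_step(doc):
    """Map: emit a (word, 1) pair for each word in a document."""
    for word in doc.split():
        yield word.lower(), 1

def shuffle(pairs):
    """Shuffle: group all emitted values by key."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_step(key, values):
    """Reduce: aggregate the values for one key."""
    return key, sum(values)

def word_count(docs):
    pairs = (pair for doc in docs for pair in map_step(doc))
    return dict(reduce_step(k, vs) for k, vs in shuffle(pairs).items())

print(word_count(['the cloud', 'the data the cloud']))
# {'the': 3, 'cloud': 2, 'data': 1}
```

Because map and reduce are pure functions over independent inputs, the framework can rerun any failed piece on another node — which is what makes commodity-hardware clusters viable for this work.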
These and other technologies can be marshaled to manage the explosive growth of data — and, in some cases, to extract new value from that data. But determining the best practices in each discipline and creating a grand strategy that drives toward an enterprise-wide solution isn't easy.
The PDC (Professional Developers Conference) is ground zero for Microsoft's big new ideas. This year's event, which kicks off today, promises to be the most momentous since 2003, when Microsoft began talking up the technologies underlying Longhorn — which eventually spawned Vista, Windows Server 2008, and a bunch of important web services protocols.
The failure of Vista, which Microsoft still stubbornly refuses to acknowledge, raises the stakes for this year's PDC. All eyes will be on Windows 7, which will be revealed behind closed doors to a select few. But as InfoWorld's Randall C Kennedy has been saying since June, no one should expect Windows 7 to be a major departure from Vista — it will be more like Vista Second Edition, with similar system requirements. Big surprises seem unlikely.
So what's the main event? The unveiling of Microsoft's so-called cloud OS and the development tools and models for it.
[ Steve Ballmer said at a recent London conference that a new "Windows Cloud" operating system will be unveiled at PDC ]
For years Microsoft has struggled to clarify its cloud strategy: On the one hand, to compete with Google, it realises it must embrace a new world of cloud-based apps and services; on the other, it's terrified of cannibalising its desktop software business. The vague Microsoft answer to this conundrum has been "software plus services", where locally installed Microsoft software integrates with services or apps in the cloud.
For that idea to work, however, you need a platform on which to build and run those cloud services and apps, and that's exactly what Microsoft is poised to reveal. Earlier this month, Microsoft's director of the platform strategy group, Tim O'Brien, told me that PDC will "round out the picture at the OS and developer infrastructure level" for those who want to "participate in the transformation to services". In other words, developers are going to find out what Microsoft's new cloud platform really is and how to build on it.
Various code names for the layers of Microsoft's cloud have floated around, such as Red Dog (infrastructure?) and Zurich (runtime environment?), as well as several theories about the platform's structure. And a casual look at the PDC schedule provides some strong hints. Cloud sessions abound, from building a first cloud service to extending local .Net services into the cloud. And we already know a fair amount about Microsoft's Live Mesh, which is both a development platform and a folder-sharing and synchronizing service for end-users.
For me, what's really exciting about all this is that when you throw in the forthcoming .Net 4.0 Framework and the new Oslo modelling language, you have all the elements necessary for creating and orchestrating services into composite applications that may include services in the cloud.
That's a software-plus-services strategy that makes sense to me. Over the long haul, it might even breathe new life into service-oriented architecture. But I'm getting ahead of myself. As usual, it's all in the execution, and if the past is any guide, we won't be able to evaluate how well Microsoft has done until months after PDC is over.
What separates enterprise applications from desktop apps? Mainly, business rules and workflow. At its recent Professional Developers Conference (PDC), Microsoft announced Windows Workflow Foundation (WWF), a new Windows technology that will enable developers to stitch together Microsoft Office apps and custom-built software into composite enterprise-class workflow applications. Scott Woodgate, Microsoft’s group product manager, says that with WWF Microsoft will be able to offer “the first workflow-enabled operating system.”
FRAMINGHAM (03/18/2004) - Coding is the easy part of creating almost any enterprise app. The hard part is modeling the business process: Purchase order A goes from point B to point C to be signed by party D so that the approved amount can be deducted from account E. But what if the PO never makes it to point C? What if account E is closed?
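Workflow engines address exactly this by modeling the process as explicit states and transitions, so the exceptional paths (the PO never reaches point C, account E is closed) are handled deliberately rather than left implicit in the code. A minimal, hypothetical state-machine sketch of the purchase-order flow described above — the state and event names are invented for illustration, not from any Microsoft API:

```python
# Hypothetical states and events for the purchase-order flow described above.
TRANSITIONS = {
    ('created', 'route'): 'awaiting_signature',      # PO goes from B to C
    ('awaiting_signature', 'sign'): 'approved',      # signed by party D
    ('approved', 'deduct'): 'settled',               # amount deducted from account E
    ('awaiting_signature', 'timeout'): 'escalated',  # the PO never made it to C
    ('approved', 'account_closed'): 'failed',        # account E is closed
}

def advance(state, event):
    """Return the next state, or raise if the transition isn't modeled."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f'no transition for {event!r} in state {state!r}')

# Happy path:
state = 'created'
for event in ('route', 'sign', 'deduct'):
    state = advance(state, event)
print(state)  # settled
```

The value of a workflow foundation is that the exception paths sit in the same declarative table as the happy path, where business analysts and developers can both inspect them.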
You would think ERP software had been packed up in a box and shoved in a dark corner somewhere. Analysts say it consumes roughly one quarter of the average enterprise software budget. Yet hardly anyone talks about ERP anymore — not since the big bang of the '90s, when most large enterprises spent many millions of dollars to roll out sprawling, complex ERP systems, in part to modernize for Y2K.
Web applications rule the enterprise. That's the indisputable conclusion to be drawn from this year's InfoWorld Programming Survey. Despite imperatives from Microsoft Corp. and others that developers abandon server-based HTML apps for fat desktop clients, the ease of "zero deployment" through the browser continues to win the day.
SAN FRANCISCO (09/26/2003) - Senior Java developer Stan Baranek is a member of the Java elite.