Stories by David Linthicum

Opinion: iCloud's big implications for cloud computing

Apple's iCloud is out. Like many of you, I spent part of the day playing around with the new features such as application, data, and content syncing. I even purchased 20GB of cloud storage and sent a few items back and forth among my iPad, Mac, and iPhone. It's a good upgrade, but not a great one.
That's a personal take. What about the implications for IT? As you might expect, the phone began to ring soon after iCloud's debut last week. What does a "cloud guy" think about iCloud, and how will it benefit or hurt cloud computing itself going forward? There are two implications to consider.
First, it's cloud for the retail market. Many people who don't work in enterprise IT will begin to see some advantages of cloud computing. The use of cloud storage is one. Storage-as-a-service providers such as Mozy and Dropbox have been around for years. But iCloud is already linked to your iTunes account and is native to Apple's mobile productivity apps, so it'll get broader adoption as a result. Now, when I mention storage as a service, more people will know what I'm talking about.
Second, there is the potential for privacy issues that will be blamed on cloud computing. New applications in iCloud such as Photo Stream and Find My Friends have the potential for big privacy issues, such as tracking people who don't want to be tracked, or uploading embarrassing photos by mistake. All of these will be user errors, trust me. But it won't matter, as the press will blame it on iCloud, aka "the cloud." Count on seeing iStalking or cloudstalking as a new meme at some point.
Those are the good and bad aspects of iCloud. On balance, it's a progressive step toward accepting that information stored on some server at some unknown location is not a bad thing. However, as with any new technology, you have to understand the advantages and disadvantages. Cloud computing and iCloud are no exceptions.

Opinion: The failure behind the Amazon outage isn't just Amazon's

When Amazon.com's outage last week - specifically, the failure of its EBS (Elastic Block Store) subsystem - left popular websites and services such as Reddit, Foursquare, and Hootsuite crippled or outright disabled, the blogosphere blew up with noise around the risks of using the cloud. Although a few defenders spoke up, most of these instant experts panned the cloud and Amazon.com. The story was huge, covered by the New York Times and the national business press; Amazon.com is now "enjoying" the same limelight that fell on Microsoft in the 1990s. It will be watched carefully for any weakness and rapidly kicked when issues occur.
It's the same situation we've seen since we began to use computers: They are not perfect, and from time to time, hardware and software fail in such a way that outages occur. Most cloud providers, including Amazon.com, have spent a lot of time and money to create advanced multitenant architectures and advanced infrastructures to reduce the number and severity of outages. But to think that all potential problems have been eliminated is just naive.
Some of the blame around the outage has to go to those who made Amazon.com a single point of failure for their organizations. You have to plan and create architectures that can work around the loss of major components to protect your own services, as well as make sure you live up to your own SLA requirements.
Although this incident does indeed show weakness in the Amazon.com cloud, it also highlights liabilities in those who've become overly dependent on Amazon.com. The affected companies need to create solutions that can fail over to a secondary cloud or locally hosted system - or they will again risk a single outage taking down their core moneymaking machines. I suspect the losses around this outage will easily track into the millions of dollars.
Never trust a single system component, be it a cloud, a network, a router, a database, or whatever. Figure out what to do when a component goes offline or fails in other ways. The typical solution is to fail over to secondary components that can operate until the primary is back online. That used to be a given in IT. Unfortunately, many organizations have put too much trust into clouds, pushing their systems out to providers with the mistaken belief that a third party will provide the resiliency and redundancy they require.
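The fail-to-secondary pattern described above can be sketched in a few lines. This is a minimal illustration, not a production implementation; the endpoint URLs and the `flaky_request` function are hypothetical stand-ins for a real primary cloud service and a secondary cloud or locally hosted fallback.

```python
import time

# Hypothetical endpoints for illustration only. In practice these would be
# your primary cloud provider and a secondary cloud or on-premises fallback.
PRIMARY = "https://primary.example.com/api"
SECONDARY = "https://secondary.example.com/api"

def call_with_failover(request_fn, endpoints, retries_per_endpoint=2):
    """Try each endpoint in order, failing over when one is unreachable."""
    last_error = None
    for endpoint in endpoints:
        for attempt in range(retries_per_endpoint):
            try:
                return request_fn(endpoint)
            except ConnectionError as err:
                last_error = err
                # Simple linear backoff before retrying the same endpoint.
                time.sleep(0.1 * (attempt + 1))
    # Every endpoint failed: surface the last error rather than hanging.
    raise RuntimeError("all endpoints failed") from last_error

def flaky_request(endpoint):
    """Simulated client call: the primary is down, the secondary works."""
    if "primary" in endpoint:
        raise ConnectionError("primary is down")
    return f"ok from {endpoint}"

print(call_with_failover(flaky_request, [PRIMARY, SECONDARY]))
```

The point is architectural, not the dozen lines of code: the caller names an ordered list of independent components, so losing any single one of them degrades service rather than eliminating it.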
As we've seen so dramatically, clouds have limitations, too. Don't get mad at that fact - just deal with it.

Opinion: IEEE's cloud portability project: A fool's errand?

The IEEE, the international standards-making organization, is jumping with both feet into the cloud computing space with the launch of its new Cloud Computing Initiative. The IEEE is trying to create two standards for how cloud applications and services would interact and be portable across clouds.
The two standards are IEEE P2301, Draft Guide for Cloud Portability and Interoperability Profiles, and IEEE P2302, Draft Standard for Inter-cloud Interoperability and Federation.
The goal of IEEE P2301 is to provide a road map for cloud vendors, service providers, and other key players for use in their cloud environments. If IEEE P2301 does what it promises and is adopted, the IEEE says, it would aid users in procuring, developing, building, and using standards-based cloud computing products and services, with the objective of enabling better portability, increased commonality, and interoperability.
The goal of IEEE P2302 is to define the topology, protocols, functionality, and governance required to support cloud-to-cloud interoperability.
Don't expect anything to happen any time soon. The standards process typically takes years and years. Even the first step has yet to occur for these standards: the formation of their working groups. However, IEEE is good at defining the details behind standards, as evidenced by its widely used platform and communication standards. By contrast, most of the standards that emerge from organizations other than the IEEE are just glorified white papers — not enough detail to be useful.
The cloud industry has already been working toward interoperability, as have some other standards organizations. But none of those efforts has exactly set the cloud computing world on fire. I like the fact that the IEEE is making this effort, versus other standards organizations whose motivations are more about undercover marketing efforts than unbiased guidelines to aid users.
But reality gets in the way, and I have my doubts that anything useful will come out of the IEEE efforts in any reasonable timeframe. The other standards groups involved in cloud computing have found that many of the cloud providers are more concerned with driving into a quickly emerging market and being purchased for high multiples than about using standards.
I suspect that most major cloud providers will send reps to IEEE working groups. But as we've seen countless times in other standards efforts (think of the tortured histories of HTML and 802.11), it's a long journey from kicking off an effort to having vendors, service providers, and even users define and adopt useful standards.
In many respects, the use of standards is counterproductive to achieving market penetration. I mean, why support a standard that makes it easy for users to move off your cloud platform? Or to support a standard that allows your client to communicate with your competitor's cloud? Fat chance of that being accepted, at least in the short term.
If these interoperability and portability standards are going to take root, there has to be a grassroots movement from the cloud user community to demand that these guidelines be followed. Right now, users don't seem to be thinking about that, and it may be a while before they ask the tough questions around interoperability and portability.
It's just a reality check, guys. Don't kill the messenger.
