By consolidating computing resources into a few data centres, IT organisations favouring the centralised approach have benefited from a lower cost of ownership, because centralised architectures require fewer IT staff to support than distributed computing architectures do. As a result, distributed computing failed to become the dominant architecture even during the heyday of the internet: managing hundreds or even thousands of servers flung across the enterprise proved too expensive and too difficult.
In the wake of recent events, however, the conventional wisdom about what constitutes an expense will change. In today's global environment, centralised operations are a liability. Companies with centralised computing architectures affected by the attacks in New York on September 11 are having a harder time recovering than companies whose distributed architectures allowed them to move business functions to other locations more easily.
In theory, companies that build some form of distributed computing architecture will carry a higher cost of doing business compared to companies that rely primarily on massive data centres.
But in the event of a catastrophe brought on by a prolonged state of war, that cost now seems minimal compared with the time it would take to recover from an attack that destroyed the physical location of your computing resources.
This may seem like a crass discussion given the enormity of recent events, but the fact is executives will begin to ask a lot of questions about what happens to the business in the event of another series of attacks. And telling them that it will take a week or more to recover will not be an acceptable answer.
At the same time, many of those executives will take a harder look at distributing their own business operations to make sure that major elements of the business are not all concentrated in a single location.
All of this tends to favour distributed computing models, which, as the technology evolves, are becoming less expensive to implement and manage. They will probably remain more expensive than centralised computing models over the long haul, but the cost difference between the two continues to shrink. That difference, in effect, is akin to an insurance premium a business must pay to ensure its operations recover as quickly as possible after a catastrophic event.
Of course, the real answer to the debate over centralised v distributed computing models is to combine the best of both. In an ideal world, the data and the people using it would be distributed, while the tools needed to manage that data would be centralised. Progress is being made on this front, but we still have a long way to go before management tools live up to that expectation.
Hopefully vendors in the distributed data, systems and network management space will rise to this challenge in the months to come.