Managing the internet's complexity
- 24 June, 2001 22:00
On the face of it, this appears to be a paradox, but in reality it makes perfect sense. The more distributed something is, the harder it is to manage. And in the case of the internet, managing things is exceedingly complex.
If you look within almost any given company’s IT organisation, you’ll find an outfit that races around every day just trying to keep the network stable. And whereas IT folks may spend less time on the physical management of PC systems, the software that powers the network now includes proxy servers, firewalls and directories where once there was only a network operating system. Of course, each of these network applications comes bundled with a host of incompatibility issues that require management.
The existing crisis is only going to be exacerbated by the next wave of collaborative e-commerce applications being trumpeted by companies such as Oracle, PeopleSoft and JD Edwards. Whereas the emergence of IP may have reduced the number of protocols on the network, the level of integration magic needed to make everything work has increased tenfold.
This level of integration typically requires superhuman efforts within a company and is nearly impossible to achieve across multiple companies trying to create the equivalent of a digital enterprise platform. This is not because it's impossible to connect the applications; rather, once they are connected, it takes boatloads of people to manage them.
Truth be told, the one area that has failed to keep pace with innovation on the internet has been network management. At one point network management tools based on agents running everywhere on the network were going to create frameworks that would rescue us from the ills of client-server computing.
But once the internet became the platform, agent-based solutions were no longer practical because you couldn’t scale the system enough to put agents everywhere on the internet.
As a result, many companies today have fantastic consoles that show maps of the network. But very little network management actually takes place beyond telling people which router or server is down. Determining the reason that the server or the router is down still requires a visit from a human with a cognitive brain.
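To see why those consoles fall short, consider what such a tool actually does at its core. A minimal sketch of this style of monitoring might look like the following (the device names and ports here are purely illustrative assumptions, not any vendor's actual product): it can tell you *that* a device stopped answering, but nothing about *why*.

```python
import socket

def is_reachable(host, port=80, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def poll(devices):
    """Map each device name to 'up' or 'down'.

    This is the limit of the console's knowledge: it detects the
    outage, but diagnosing the cause is left to a human.
    """
    return {name: ("up" if is_reachable(host) else "down")
            for name, host in devices.items()}
```

The sketch makes the column's point concrete: the hard part of management is everything that happens after `poll` reports "down".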
The management conundrum associated with collaborative e-commerce applications is actually leading some organisations to rethink how they approach enterprise computing. Rather than trying to link various data centres via the internet to create a virtual entity, would it not be simpler to build one data centre that is shared by everybody in the network?
This means, for example, that financial services organisations could come together in one place to integrate applications at a facility that was managed by one team of people. This probably would be significantly more efficient than hiring 10 teams of people to manage 10 separate data centres, and then sitting back and watching those 10 groups of people trying to remotely integrate disparate systems.
It is unlikely that we’ll see a massive wave of data centre consolidation, but as we go forward some sort of hub and spoke model starts to make a lot more sense. After all, being on the edge of something is only defined as it relates to where the centre is. So that means that at the core of a network could be a shared data centre that is augmented by distributed data centres that in turn support all the data and logic that resides on the edge of the network.
So give centralised management a chance. After all, it’s a whole lot easier to go and ask somebody down the hall how to resolve a problem than it is to get somebody on the phone or wait for them to receive and respond to an email.
Vizard is editor-in-chief of InfoWorld. Send email to Michael Vizard.