Deeper rabbit holes will result from virtualisation

Don't virtually dig too deep, says Tom Yager

Virtualisation is designed to render differences between systems irrelevant, but this is a good-news/bad-news arrangement. The good news is that ICT can treat every server as a standard x86 system. The bad news is the new difficulty of diagnosing and treating serious but non-fatal illness when virtualisation conceals the source of the problem.

Virtual machine managers (VMMs) and their tools do an admirable job of notifying administrators of both catastrophic failures and the self-healing steps the VMMs take in response to issues such as resource shortages. But as I keep warning, we’re enjoying the final days of the era of simple virtualisation. Soon we’ll be entangled in the knotted nomenclature of host/guest, privileged-user and hardware-assisted system virtualisation on servers with multiple multithreaded, multicore CPUs.

Once we graft storage and network virtualisation onto that terminological tree, each ICT organisation will have to rely on software — or consultants, who are harder to carry around — to help it map and navigate its peculiar maze of physical and virtual entities and pathways.

There are two main rabbit holes that many implementers of growing virtualised enterprises haven’t yet uncovered: the difficulty of identifying and addressing critical problems that stop short of full failure, and the fact that blind trust in a virtualisation solution’s tendency to do the right thing guarantees a reduced return on investment.

Virtualisation makes wider distribution of previously tightly clustered solutions irresistible. It’s obvious that this alone doesn’t translate to better performance, and when multiple virtual machines reside on the same host and publish the same services, it adds as much complexity as protection.

When metrics such as latency, connection capacity and dropped connections indicate a problem in one virtual machine but not in others on the same host, troubleshooting gets painful quickly. Consider guest OSes that are unaware that the hardware they see doesn’t actually exist. The virtual LANs, host bus adapters, volumes, memory and CPUs that look local to the guest are mirages created by the VMM. If most guests are happy living in this unreality but one or a few go into decline, the diagnostics and workarounds that experienced administrators apply under the one-system, one-OS model cease to work.
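When one guest's metrics diverge from those of its co-hosted peers, a useful first step is simply to compare each guest against the others on the same physical host. Here is a minimal sketch of that idea; the guest names, latency figures and the threshold factor are all hypothetical, and no real VMM API is assumed:

```python
from statistics import median

def flag_outlier_guests(latencies_ms, factor=3.0):
    """Flag guests whose latency exceeds factor x the median of their co-hosted peers."""
    med = median(latencies_ms.values())
    return sorted(g for g, v in latencies_ms.items() if v > factor * med)

# Hypothetical per-guest latency samples gathered from one physical host
host_metrics = {"guest-a": 4.1, "guest-b": 3.9, "guest-c": 4.3, "guest-d": 41.7}
print(flag_outlier_guests(host_metrics))  # → ['guest-d']
```

Comparing against the median of peers, rather than an absolute threshold, keeps the check meaningful even when the whole host is loaded: it isolates the one guest that is suffering while its neighbours are not.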

As their use of virtualisation grows in scale and purpose, users need to match their virtualisation solution and its specific deployment to the capabilities of their hardware. System virtualisation does paint over the physical differences among systems, but relying on that convenience will reduce the investment return.
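One concrete way to match a deployment to its hardware is to check whether the host CPUs actually advertise hardware-assisted virtualisation before choosing a solution that depends on it. A sketch assuming a Linux host, parsing the standard flags line of /proc/cpuinfo ("vmx" is Intel VT, "svm" is AMD-V):

```python
def hw_virt_flags(cpuinfo_text):
    """Return the hardware-virtualisation flags present in /proc/cpuinfo text."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return sorted(flags & {"vmx", "svm"})

# Illustrative /proc/cpuinfo excerpt; on a real host, read the file instead
sample = "processor\t: 0\nflags\t: fpu pae svm sse2\n"
print(hw_virt_flags(sample))  # → ['svm']
```

On a real system you would pass `open("/proc/cpuinfo").read()`; an empty result means the VMM must fall back to software techniques such as binary translation or paravirtualisation.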

Look at the aggressive roadmap of highly relevant technology advances that AMD has made, and will make through the end of 2006, and ask whether your virtualisation and hardware purchasing strategy specifically takes them into account. It likely does not, and if that’s the case, your operation’s return on its system virtualisation investment might fall short of its potential by a third.

Virtualisation adds to the skills your ICT staff must possess: the headache-inducing problems that take ICT staff and developers down a rabbit hole today will take them down deeper, more complicated ones once virtualisation becomes the de facto means of bringing new capacity online.
