While blade servers can offer tremendous benefits for the datacentre, early adopters warn those thinking of implementing them to plan very carefully.
“The impact on facilities wasn’t considered when blades first came out, so you have to do some serious capacity planning and architecture development before deploying them,” says Brian Smith, datacentre manager at The Cerner Corporation, a US healthcare software vendor.
Blades are self-contained servers that support high-density computing. Unlike their standalone predecessors, they share chassis components, such as power supplies, cooling fans and network switching, with other blades to ease management, organise cabling and shrink the server footprint in the datacentre.
Cerner has been working with blade servers in its seven datacentres for the past three years and has almost 1,200 in use today.
Smith says he has learned firsthand the promise and perils of the technology. On the upside, blade servers allow companies to consolidate their operations and employ advanced management tools such as virtualisation. On the other hand, they are notorious energy drains that wreak havoc on datacentres’ power and cooling resources. “Datacentres can cook if they aren’t prepared for the high density,” Smith says.
Jeff Stein, director of professional services at InteleNet Communications, agrees. “The typical power requirement for a standard server is 120-volt power. The typical requirement for a blade is 208-volt power. Some facilities just can’t offer that,” he says.
InteleNet, a managed service provider, has 500 blade servers split between its main facility in Irvine, California, which it owns, and another facility in Phoenix, Arizona. Stein just completed “a significant power expansion project” to support the blades. “In Irvine, the original construction and electrical designs for the facility were able to deliver a certain number of watts per square foot on average. Recent hardware developments, such as the blade servers, have forced us to enhance the infrastructure of this datacentre to support the increasing electrical and cooling requirements,” he says.
He admits the team ran into challenges when they first deployed the blades almost two years ago. “We run a datacentre, deal with lots of power requirements and we still made an error when we bought our first chassis,” he says.
The problem was that the team bought power distribution units and cabling that were much larger than anticipated. “This limited what additional equipment could be installed effectively in the same cabinet with the blades. We made sure to take note so that we never make that mistake again,” he says.
Another common mistake that datacentre teams face when dealing with blade servers is space allocation, he says.
“You have this perception that because the blade servers are smaller and vertically mounted, you’ll be able to put more in a rack. That’s not always true.”
Stein says a traditional server chassis holds one horizontally mounted server per rack unit. Blade chassis, by contrast, tend to be seven or nine rack units tall and hold 14 independent blades. However, this higher server density brings corresponding increases in power and cooling requirements.
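Stein’s rack-unit arithmetic can be checked with a quick calculation. The sketch below is illustrative only: the 42U rack height and the per-server wattage are assumptions, not figures from the article.

```python
# Compare server density and power draw in a standard rack.
# All fixed figures here are illustrative assumptions.
RACK_UNITS = 42  # assumed full-height rack

# Traditional servers: one horizontally mounted server per rack unit.
u1_servers = RACK_UNITS // 1

# Blade chassis: assume 7U per chassis holding 14 blades, per Stein's description.
CHASSIS_U, BLADES_PER_CHASSIS = 7, 14
chassis_per_rack = RACK_UNITS // CHASSIS_U
blade_servers = chassis_per_rack * BLADES_PER_CHASSIS

print(f"1U servers per rack:    {u1_servers}")     # 42
print(f"Blade servers per rack: {blade_servers}")  # 6 chassis x 14 blades = 84

# The density doubles, and so, roughly, does the rack's power draw.
WATTS_PER_SERVER = 350  # assumed average draw per server
print(f"1U rack draw:    {u1_servers * WATTS_PER_SERVER} W")
print(f"Blade rack draw: {blade_servers * WATTS_PER_SERVER} W")
```

Even with generous assumptions, the point survives: twice the servers in the same floor space means roughly twice the power and heat per rack.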
Andi Mann, analyst at Enterprise Management Associates, agrees that blade servers can be deceiving. “You can’t rack up two or three next to each other; sometimes you can’t even fill up a whole rack,” he says. He encourages datacentre teams to plot out their equipment needs. “You need tools to help you understand your hot spots and where you need to run power. Remember, you’re co-locating a lot more power drain into a single circuit, and you need to ensure you aren’t overloading the system.”
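Mann’s warning about overloading a circuit amounts to a simple headroom check. The sketch below assumes a 208-volt, 30-amp circuit and a 4.5kW chassis draw, both illustrative; the 80 per cent derating reflects the common electrical practice of not loading a circuit beyond 80 per cent of its rating for continuous loads.

```python
# Sketch of a circuit-loading check in the spirit of Mann's advice.
# The circuit rating and chassis draw are illustrative assumptions.

def circuit_headroom(volts: float, amps: float, loads_w: list,
                     derate: float = 0.8) -> float:
    """Return the remaining watts on a circuit after derating.

    derate=0.8 follows the common practice of loading a circuit to
    no more than 80% of its rated capacity for continuous loads.
    """
    usable = volts * amps * derate
    return usable - sum(loads_w)

# A 208-volt, 30-amp circuit (the voltage Stein cites for blades),
# already carrying one assumed 4.5 kW blade chassis:
remaining = circuit_headroom(208, 30, loads_w=[4500])
print(f"Usable headroom: {remaining:.0f} W")

if remaining < 4500:
    print("A second identical chassis would overload this circuit.")
```

A circuit that comfortably carried a rack of standalone servers can be exhausted by a single chassis, which is why Mann recommends mapping power per circuit before racking anything.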
He suggests pre-empting problems by implementing dedicated power and space planning programs such as Visual Network Design’s Rackwise and Aperture’s Vista. Hewlett-Packard’s Insight Power Manager also tracks ongoing consumption.
John Rowell, chief technology officer at OpSource, says not planning ahead of time leads to cost issues down the road. “For larger server deployments, you really have to become a power expert, otherwise you’ll get burned on costs,” he says.
OpSource, a software-as-a-service provider, expanded its datacentres during 2005 and 2006, increasing its pool of blade servers to more than 850 and sending power demand through the roof. Combined with the rising price of power over that period, he says, costs increased to more than two and a half times their starting level.
IT departments should keep this situation in mind, especially if they engage in chargeback or other budgeting practices that require user departments to pay for the IT resources they consume.
Rowell says there are two primary drivers for a move to blades: the number of servers required to support today’s applications, and the increase in CPU and memory needed to support those applications. “Faster processors and larger memory chips that come in these servers need more power to run. This combination has created a multiplier effect on the power requirements of datacentre deployments,” he says.
To ensure that they are on target when purchasing equipment, Rowell says his team uses software tools to do a CPU/memory-to-watts analysis. “It typically requires three times the server CPU/memory capabilities to run an application today than was required in 2001,” he says.
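The article doesn’t describe Rowell’s tooling, but a CPU/memory-to-watts analysis of the kind he mentions could be sketched as follows. Every coefficient here, including the base, per-socket and per-gigabyte wattages, is an illustrative assumption.

```python
# Rough CPU/memory-to-watts estimate (all coefficients are assumptions).

def estimated_watts(cpu_sockets: int, watts_per_socket: float,
                    memory_gb: float, watts_per_gb: float,
                    base_watts: float = 50.0) -> float:
    """Estimate a server's draw from its CPU and memory configuration."""
    return base_watts + cpu_sockets * watts_per_socket + memory_gb * watts_per_gb

# A modest 2001-era server versus roughly three times the CPU/memory
# capability today, per Rowell's rule of thumb:
w_2001 = estimated_watts(cpu_sockets=1, watts_per_socket=60,
                         memory_gb=2, watts_per_gb=3)
w_today = estimated_watts(cpu_sockets=2, watts_per_socket=95,
                          memory_gb=16, watts_per_gb=3)

print(f"2001-era estimate: {w_2001:.0f} W")  # 116 W
print(f"Current estimate:  {w_today:.0f} W") # 288 W
```

Running the same analysis across a planned deployment is what lets a team translate a hardware order into a wattage budget before the kit arrives.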
Cerner’s Smith says there are other considerations with blades, too, such as rack size. “Depending on how many chassis you put in a rack, they are getting taller. If you don’t plan for it, the doors into the rooms might not be tall enough. We’ve had to replace some doors,” he says. The height also poses a problem for cabling. “We do our cable management overhead to make sure we have enough room,” he says.
There are some band-aid measures that can be put in place to ease blade servers’ power and cooling burden on the datacentre. “You can leave blank floor tiles around the racks to get cold air in; you can get a back door that sends heat out of the room; and you can bring water into the datacentre to cool it. There are lots of work-arounds,” Smith says.
But he warns, “All that can add up. So the pluses of using blade servers can get outweighed by the cost of dealing with the high power and cooling needs.”
Although some IT staff are quick to point out the costs and other issues inherent with blade servers, they are equally adamant about never going back to stand-alone servers.
Rowell says he wouldn’t give up the strong management tools for his Linux and Microsoft environment. “One of the primary reasons we went with blades is for the virtualisation tools,” he says. His team has put multiple instances of software across a host of blades so that when a customer has an event, such as the launch of a new product, they can easily ramp up server capacity to support the traffic surge.
Cerner’s Smith says the key to balancing the pros and cons of the blade servers is to stay on top of your datacentre needs and not be caught off guard. One way he does this: “Our IT team meets with the facilities team every week to make sure everything is running smoothly. We have a list we run through — are we running out of power, space, cooling?” he says.
For InteleNet’s Stein, blade servers have been a godsend. “It’s worth it for us to make any modifications for our blade servers, because we don’t have the headaches we used to have, such as unracking servers and tearing them apart to reconfigure them. All we have to do is take out a blade, upgrade it and stick it back in.”