4 big questions about green IT

If you're wondering why you should care or what you should do about sustainability in the datacentre, Robert L. Mitchell lays it all out

Green computing is a hot-button issue right now, but not all the ideas out there are practical for datacentres. "It's 90% hype," says Ben Stewart, senior vice president of facilities planning at Terremark Worldwide. He's dubious about solar and wind power, for example. But Stewart says 10% of the ideas are win-win: done right, certain green initiatives can increase energy efficiency, reduce carbon emissions and yield savings.

According to Steve Sams, vice president at IBM Global Technology Services, there's only one way to evaluate green energy options. "If I spent the money, where would I get the best return? That's the question to ask," says Sams. The key is knowing where to start. These four questions and answers can help you develop a plan.

1. Why should I care about having a green datacentre?

Datacentre managers who have run out of power, cooling or space are already motivated to move to greener practices. But many others don't care because they put reliability and performance first — and they don't see the power bills, says Peter Gross, CEO at New York-based EYP Mission Critical Facilities. That's likely to change as electricity consumption continues to rise. "Our datacentres are a small fraction of our square footage but a huge percentage of our total energy bill," says Sams.

The cost of electricity over a three-year period now exceeds the acquisition cost of most servers, says Gross. "I don't know how anybody can ignore such an enormous cost. It is the second-largest operating cost in datacentres after labour," he says. Gross says that every CIO, facility manager and CEO he meets expresses concern about datacentre energy efficiency.

"My CEO is beating the drum about cutting power consumption," says John Engates, chief technology officer at San Antonio-based hosting company Rackspace. He says just 50% of power coming into the datacentre goes to the IT load. The rest is consumed by surrounding infrastructure, including power, cooling and lighting. "If you're using less power, you're spending less money. It's just good business," Engates says.
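The 50/50 split Engates describes can be expressed as a simple ratio of IT power to total facility power (the inverse of what the industry calls power usage effectiveness). A minimal sketch, with illustrative figures that are not from Rackspace:

```python
def it_power_fraction(it_load_kw: float, total_facility_kw: float) -> float:
    """Fraction of incoming power that actually reaches the IT load."""
    return it_load_kw / total_facility_kw

# Hypothetical facility: 2,000 kW drawn at the meter, 1,000 kW reaching
# servers -- the 50% split Engates describes; the rest goes to power
# distribution, cooling and lighting.
print(it_power_fraction(1000, 2000))  # 0.5
```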

Returns on investment can be difficult to determine, however, because in most cases, the IT staff in a datacentre doesn't see the power bill. "The single most important step is to find ways to measure efficiency in your facility," says Gross. "You cannot control what you cannot measure."

One way to determine overall datacentre energy efficiency and provide a benchmark is to hire professionals to do an analysis. An inspection by IBM Global Technology Services costs US$50,000 to $70,000 (NZ$66,000 to $93,000) for a 30,000-square-foot datacentre, says Sams.

But just a one- or two-day engagement might get you most of the benefits for a lot less money, says Rakesh Kumar, an analyst at Gartner. "You can get 80% accuracy with a small investment in consultancy costs," he says. "That's good enough to make some judgments."

2. What steps can I take to increase the efficiency of my datacentre's IT equipment?

The biggest savings come from server consolidation using virtualisation technology. Not only does this remove equipment from service, but it also helps raise server utilisation rates from the typical 10% to 15% load today, increasing energy efficiency.

Consolidating onto new servers brings an additional benefit. Power-supply efficiencies for servers purchased more than 12 months ago typically range from 55% to 85%, says Gross. That means 15% to 45% of incoming power is wasted before it hits the IT load. Newer servers operate at 92% or 93% efficiency, and most don't drop below 80%, even at lower utilisation levels.
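The waste Gross describes is straightforward to quantify. A minimal sketch using the efficiency figures quoted above (the 500 W input figure is illustrative):

```python
def wasted_watts(input_watts: float, psu_efficiency: float) -> float:
    """Watts lost in the power supply before reaching the IT load."""
    return input_watts * (1 - psu_efficiency)

# For a hypothetical 500 W draw: an older 55%-efficient supply wastes
# 45% of input power, a newer 92%-efficient one only 8%.
print(round(wasted_watts(500, 0.55)))  # 225
print(round(wasted_watts(500, 0.92)))  # 40
```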

Using virtualisation, Affordable Internet Services Online, based in California, consolidated 120 servers onto four IBM xSeries servers. "Now we don't have the power use and cooling needs we had before," says CTO and co-founder Phil Nail.

Using networked storage can also keep energy costs in check. Direct-attached storage devices use 10 to 13 watts per disk. In an IBM BladeCenter, for example, 56 blades can use 112 disk drives that consume about 1.2 kilowatts of power. Those can be replaced with a single 12-disk Serial Attached SCSI storage array that uses less than 300 watts, says Scott Tease, BladeCenter product manager.
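Tease's figures translate into a sizeable annual saving per chassis. A rough sketch (the helper function is illustrative, not an IBM tool):

```python
def annual_kwh_saved(das_watts: float, array_watts: float) -> float:
    """Energy saved per year by replacing direct-attached disks
    with a single networked array, assuming 24x7 operation."""
    return (das_watts - array_watts) * 24 * 365 / 1000  # kWh

# Tease's figures: 112 direct-attached disks at ~1.2 kW versus one
# 12-disk SAS array drawing under 300 W.
print(annual_kwh_saved(1200, 300))  # 7884.0 kWh per year
```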

IT managers should demand more energy-efficient designs for all datacentre equipment, says Engates. He says his company standardised on Brocade switches in part because of their energy efficiency and "environmental friendliness".

3. How can I get more out of my datacentre's cooling and mechanical systems?

Getting back to basics is key, says Dave Kelley, manager of application engineering at Ohio-based Liebert Precision Cooling, a division of Emerson Network Power. "You have to go back and look at a lot of the things that you didn't worry about 10 years ago."

The biggest potential savings come from airflow optimisation. For every kilowatt of load, each rack in a datacentre requires 100 to 125 cubic feet of cool air per minute. Airflow blockages under the floor or air leaks in the racks can cause substantial losses, says Kelley. The typical response to such problems has been to increase the air conditioning temperature — and that's a big energy-waster.
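Kelley's rule of thumb makes rack cooling requirements easy to estimate. A minimal sketch (the function name and the 8 kW example rack are illustrative):

```python
def required_cfm(rack_load_kw: float, cfm_per_kw: float = 110.0) -> float:
    """Cool air a rack needs, in cubic feet per minute, using the
    100-125 CFM-per-kW range Kelley cites (midpoint by default)."""
    return rack_load_kw * cfm_per_kw

# A hypothetical 8 kW rack needs on the order of 880 CFM of cool air;
# underfloor blockages or rack leaks eat directly into this supply.
print(required_cfm(8))  # 880.0
```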

Simple steps such as implementing hot-aisle/cold-aisle designs, sealing off cable cutouts, inserting blanking plates and clearing underfloor obstructions make a big difference. With greater airflow efficiency, air conditioning output temperatures can be raised.

After performing a computerised airflow analysis of its datacentres, San Francisco-based Wells Fargo did exactly that. "In many datacentres, you can hang meat in there, they're so cold. With computerised control and better humidification systems, we've raised the set point of our datacentres so we're not overcooling them," says Bob Culver, senior vice president and manager of facilities for Wells Fargo's technology information group.

At Pacific Gas and Electric (PG&E), cable races under the floor were blocking 80% of the airflow. The utility expects to save 15% to 20% in energy costs by rewiring under the floor, redesigning the return-air plenum and carefully choosing and placing perforated tiles in the cold aisles. Choosing the right perforated tile — a seemingly small consideration — can actually make a big difference. "There are better tiles out there that will give you more efficient distribution of cool air," says Jose Argenal, PG&E's datacentre manager. The changes also allowed PG&E to avoid adding chillers, pumps and piping — and piping is a potential problem in its older, basement-level datacentre.

Datacentre managers can also optimise air conditioning systems by using variable-speed fans, says Ken Baker, datacentre infrastructure technologist at Hewlett-Packard. "AC runs at 100% duty cycle all the time, and the fans have one speed: on," he says. HP's Dynamic Smart Cooling initiative uses rack-mounted temperature sensors, and variable-speed fans allow the power consumption of air conditioning units to vary with the IT equipment load. Intelligent control circuitry manages both fan speed and temperature settings on air conditioners.

It's relatively easy to retrofit existing fans, Baker says, and the approach has two major benefits. One is that cutting fan speed dramatically reduces energy use. A 10-horsepower fan uses 7,500 watts of power at full speed but just 1,000 watts at half speed, he says. The increased efficiency also allows the temperature of the cool air supply to be automatically raised from the typical 55 degrees Fahrenheit to between 68 and 70 degrees, he says.
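The dramatic drop Baker cites follows from the fan affinity laws: fan power scales roughly with the cube of shaft speed, so half speed draws about one-eighth the power. A minimal sketch of that relationship:

```python
def fan_power_watts(rated_watts: float, speed_fraction: float) -> float:
    """Fan affinity law: power scales with the cube of speed."""
    return rated_watts * speed_fraction ** 3

# Baker's 10 hp fan at ~7,500 W full speed, run at half speed --
# the cube law predicts ~938 W, close to his ~1,000 W figure.
print(round(fan_power_watts(7500, 0.5)))  # 938
```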

"The biggest low-hanging fruit is just turning the thermostat up," Baker says. People keep the temperature set too low because they fear that the equipment will overheat after a power interruption before the air conditioning system can get the room temperature back under control. "The truth is that the temperature won't rise that rapidly," Baker says.

Managers of datacentres located in colder locales can also save money by designing air conditioning systems that use economisers that take advantage of outside air to cool their facilities during the winter. Wells Fargo implemented such a system in its Minneapolis datacentre. That technology makes the most sense when designing new datacentres.

4. Are there changes I can make to my power distribution system that will increase efficiency and save money?

Datacentres use many uninterruptible power supplies (UPSs). In fact, when it comes to energy consumption, UPSs are second only to air conditioning systems among components of the datacentre infrastructure, and they represent one of the biggest areas for potential savings, says Sams. While servers tend to be refreshed every three or four years, datacentre UPS equipment tends to be much older. The units are often oversized for the load and were never designed to operate efficiently when running at low utilisation rates. While older units might run at 70% efficiency at low utilisation levels, newer UPSs run at 93% to 97% efficiency even at low utilisation levels, Sams says.

Rather than buying traditional UPSs, Terremark Worldwide went with greener technology. It replaced all of its battery-backed UPSs in its Miami datacentre with rotary UPSs. These use a spinning flywheel to deliver transitional power during the time interval between when power is lost and when generators come online. Stewart says flywheels aren't necessarily more energy-efficient than modern battery-backed UPSs, and the units can be heavy. But they take up less floor space and are greener because there are no lead-acid batteries to dispose of. Today, Terremark's Miami datacentre fits 6 megawatts of generators and UPS equipment into a 2,000-square-foot room. "To do that with a static UPS, you'd need three to five times the space just for the batteries," Stewart says.

Efficiencies can also be gained in the power distribution system. Most datacentres step voltage down several times, from 480 to 208 volts and then to 120 volts. Kelley says you can reduce conversion losses by bringing 480 volts directly to the racks and stepping it down from there. Stewart says he is considering moving Terremark's system to higher European-standard voltage for the same reason. Most IT equipment already supports a 240-volt feed. He expects to see a 4% efficiency gain. "Our power bill is $400,000 a month, so that adds up pretty quickly," Stewart says.
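Stewart's own figures show why a few percentage points matter at this scale. A quick sketch of the arithmetic (the helper function is illustrative):

```python
def monthly_savings(monthly_bill: float, efficiency_gain: float) -> float:
    """Expected monthly savings from a distribution-efficiency gain."""
    return monthly_bill * efficiency_gain

# Stewart's figures: a 4% gain on a $400,000-a-month power bill.
saved = monthly_savings(400_000, 0.04)
print(saved)       # 16000.0 per month
print(saved * 12)  # 192000.0 per year
```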

The best green options will vary with the configuration of each datacentre. The key to success is to focus on the big picture when assessing overall power and cooling needs, says Gross. "Know what you have, benchmark it, figure out where the low-hanging fruit is, and start one element at a time," he says.
