Datacentre operators will go to almost any lengths to avoid an overheated server. Case in point: a financial institution in London suffered a power shortage during a very hot day one recent summer, and was left with no ability to cool its servers and storage.
"They went to the extent of having the fire department come and hose down the outside of the building [to cool it down]," says IBM's Rich Lechner, who is leading Big Green, an IBM datacentre efficiency project.
That's certainly an unusual step, and probably one to avoid. But datacentre managers throughout the world are missing opportunities to use less power, Lechner and other speakers said during a December panel discussion on datacentre cooling hosted by the Mass Technology Leadership Council (MTLC), a non-profit industry group in the US.
Datacentre energy consumption as a percentage of total United States electricity use has doubled since 2000, and datacentres and servers will double their energy consumption again — to 100 billion kilowatt-hours — by 2012, according to the US Environmental Protection Agency.
"It is our number one expense. I pay more for electricity than I do for rent," said Wayne Sawchuk, CEO and co-founder of ColoSpace, which provides co-location services in six datacentres comprising more than 4,000 servers in Massachusetts and New Hampshire. "I have a tremendous desire to reduce our electric bill every month," he said at the panel discussion.
IBM is redesigning its own datacentres as it attempts to double its computing capacity within three years without increasing energy consumption. Big Blue is also focusing on customers, and has announced x86-based systems that "will essentially require zero air conditioning," Lechner said. IBM is using liquid cooling and putting thermal sensors inside servers, allowing fans inside the server to move at different speeds depending on need, he said.
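IBM's actual firmware isn't described here, but the idea of sensor-driven fan control can be sketched simply: read the server's internal thermal sensors and spin the fans only as fast as the hottest component requires. The temperature thresholds and minimum-speed floor below are illustrative assumptions, not IBM figures.

```python
def fan_speed_percent(sensor_temps_c, idle_c=25.0, max_c=70.0):
    """Map internal thermal-sensor readings (Celsius) to a fan duty
    cycle, scaling linearly between an assumed idle temperature and
    an assumed thermal limit. Driven by the hottest sensor, so fans
    never spin faster than the worst hot spot demands."""
    hottest = max(sensor_temps_c)
    if hottest <= idle_c:
        return 20.0   # assumed minimum-airflow floor
    if hottest >= max_c:
        return 100.0  # full speed at the thermal limit
    span = max_c - idle_c
    return 20.0 + 80.0 * (hottest - idle_c) / span

# A cool server barely spins its fans; a hot one ramps up.
print(fan_speed_percent([28.0, 31.0, 26.0]))
print(fan_speed_percent([65.0, 55.0]))
```

Keying the duty cycle to the hottest sensor rather than an average is the conservative choice: it protects the worst hot spot while still letting a lightly loaded server run its fans near the floor.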
Panelists discussed many datacentre efficiency dos and don'ts during the two-hour session. As most people know, virtualisation of storage and servers can dramatically increase utilisation rates, getting the most out of each piece of hardware.
"One of the worst things we see in IT is [people saying] 'we have a new application, let's go buy a new server,'" said Thomas Humphrey, senior business development manager for APC-MGE, a power and cooling services vendor.
A smart user of virtualisation technology constantly monitors server utilisation rates, sometimes moving workloads off a little-utilised server so it can be shut down for the rest of the day, Lechner said.
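The consolidation policy Lechner describes can be sketched as a simple loop: watch per-server utilisation, migrate workloads off any server that falls below an idleness threshold onto a busier server with headroom, then power the emptied server down. The threshold and headroom figures here are assumptions for illustration, not values from the panel.

```python
IDLE_THRESHOLD = 0.15   # assumed: below 15% utilisation counts as "little-utilised"
HEADROOM = 0.80         # assumed: never load a target server past 80%

def consolidate(servers):
    """servers: dict of name -> utilisation fraction (0.0-1.0).
    Moves load off little-utilised servers onto busier ones with
    spare capacity. Returns (migrations, servers_to_shut_down);
    mutates the dict in place."""
    migrations, to_shut_down = [], []
    # Visit the least-utilised servers first: they are the candidates
    # to empty out and switch off.
    for name in sorted(servers, key=servers.get):
        load = servers[name]
        if load == 0.0 or load >= IDLE_THRESHOLD:
            continue
        # Prefer the busiest server that can still absorb the load.
        target = next(
            (t for t in sorted(servers, key=servers.get, reverse=True)
             if t != name and servers[t] + load <= HEADROOM),
            None)
        if target:
            servers[target] += load
            servers[name] = 0.0
            migrations.append((name, target))
            to_shut_down.append(name)
    return migrations, to_shut_down

# Two lightly loaded servers get folded onto the busy one,
# freeing them to be powered off for the rest of the day.
fleet = {"a": 0.05, "b": 0.50, "c": 0.10}
moves, idle = consolidate(fleet)
print(moves, idle)
```

Packing load onto the busiest eligible server (rather than spreading it) empties the most machines, which is what matters when the goal is switching servers off.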
Tactics to improve efficiency include hot and cold aisle design, and putting the heaviest servers at the top of the cabinet (since heat rises and heavy servers are usually the hottest). Even if you're not building a new datacentre or undergoing a major redesign, there are numerous simple ways to improve efficiency. Make sure air conditioners are running efficiently, don't keep the temperature at 12°C when 20°C is cool enough, and look under floor boards, where you might find a "rat's nest of cabling that's impeding air flow," Lechner said.
ColoSpace's Sawchuk recommends using a product like Enviro Watch, which keeps track of power consumption for each server.
"If all else fails, when the datacentre is too hot, call the fire department," joked moderator David Kopans of Fat Spaniel Technologies, a renewable energy services company.
The designs of green datacentres and buildings can impact employee work environments, but panelists said people tend to embrace these initiatives if they're educated about the environmental and cost benefits.
"We're finding this is very much a grassroots thing. There's strong interest from employees," Lechner said. "Amongst new college hires, once you get past salary and health benefits, the number one criterion for them in selecting a workplace is how green the company is."