Name: Amory B Lovins
Title: Chairman and chief scientist
Organisation: Rocky Mountain Institute
Favourite (nonwork) pastimes: Mountain and landscape photography. In the last two years, I switched from film to digital. It’s saving silver. I’m merely inconveniencing electrons.
Role model: Besides my parents, Edwin Land, the founder of Polaroid, and David R Brower, the greatest conservationist of the 20th century. I worked with both.
Philosophy: Try to help keep it simple, and keep learning.
Favourite vices: Procrastination. And dark chocolate.
Favourite technology: The combination of computers, the internet and Google. As far as we know, there’s nothing more powerful in the universe than six billion minds wrapping around a problem.
How concerned should companies be about the energy efficiency of computers and datacentres?
Most big financial firms have US$1 billion-plus (NZ$1.36 billion-plus) IT capital budgets. Often, three-quarters or more of that budget is not for the IT equipment but for support equipment such as power supply, cooling and air handling. Many datacentres now pay more for electricity than for the IT hardware. These support assets and services are eating the IT budget, and yet most of that spending is wasted. By eliminating that waste, we could multiply several-fold the cost-effectiveness of the IT function.
How much is a watt of energy saved worth to a business?
It’s worth on the order of US$20 to US$27 in present value to get a watt out of the datacentre. So it’s really a false economy to buy a server that looks cheap but costs more in electricity.
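A minimal sketch of how a per-watt present value in that range can arise. Every input below — the tariff, the support-load overhead factor, the discount rate and the planning horizon — is an illustrative assumption, not a figure from the interview:

```python
# Hypothetical back-of-envelope: present value of removing one watt
# of IT load from a datacentre. All inputs are assumptions chosen
# for illustration only.

ELECTRICITY_PRICE = 0.12   # US$ per kWh (assumed tariff)
OVERHEAD_FACTOR = 2.0      # assumed watts of cooling/power support per IT watt
DISCOUNT_RATE = 0.07       # assumed annual discount rate
YEARS = 20                 # assumed planning horizon

annual_kwh = 1 * 24 * 365 / 1000   # one watt running continuously for a year
annual_saving = annual_kwh * OVERHEAD_FACTOR * ELECTRICITY_PRICE

# Discount each year's saving back to today and sum.
present_value = sum(annual_saving / (1 + DISCOUNT_RATE) ** t
                    for t in range(1, YEARS + 1))
print(f"Present value of saving one IT watt: ${present_value:.2f}")
```

With these particular assumptions the result lands near the middle of the US$20–27 range quoted; different tariffs or discount rates shift it within (or outside) that band.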
How much does it save upstream at the power plant?
A kilowatt-hour of electricity from a coal-fired power plant burns about a pound of coal, which ends up as carbon dioxide that drives climate change. If you save a unit of electricity at the meter, you’re saving roughly four times that much coal from being burned at the power plant. Saving one watt would, over 20 years, keep up to two tons of coal from being burned.
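The arithmetic behind these figures can be sketched roughly. The pound-per-kilowatt-hour and meter-to-plant factors come from the answer above; the facility cascade factor is an assumption added for illustration — it stands for the support-equipment load and conversion losses also avoided per IT watt saved, and is chosen as an upper bound that reproduces the "up to two tons" quoted:

```python
# Back-of-envelope sketch of the coal figures quoted above.
# COAL_LB_PER_KWH and METER_TO_PLANT come from the interview;
# FACILITY_CASCADE is an assumed upper-bound multiplier, not an
# interview figure.

COAL_LB_PER_KWH = 1.0    # lb of coal burned per kWh generated
METER_TO_PLANT = 4.0     # coal saved per unit of electricity saved at the meter
FACILITY_CASCADE = 5.7   # assumed total watts avoided per IT watt saved
YEARS = 20
LB_PER_SHORT_TON = 2000

kwh_saved = 1 * 24 * 365 * YEARS / 1000   # one watt running continuously
coal_lb = kwh_saved * COAL_LB_PER_KWH * METER_TO_PLANT * FACILITY_CASCADE
print(f"Coal avoided per IT watt saved: {coal_lb / LB_PER_SHORT_TON:.1f} tons")
```

With a smaller, more typical cascade the total comes out well under two tons, which is consistent with the "up to" qualifier.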
What is the biggest mistake regarding IT energy efficiency that you see in datacentres?
Not paying attention to designing your IT systems to be highly efficient in the first place. This comes about through stovepiped corporate functions. The people who set the IT strategy and the requirements for the equipment, and those who run the equipment and the facilities, have to be around the same table.
It is reinforced by a tendency to think of costs, and for datacentre landlords to charge tenants, in terms of square feet, whereas most of the costs are actually driven by watts, not by area. That immature real-estate model causes densification that greatly complicates cooling and increases capital and operating costs.
What change would give IT the biggest bang for the buck?
Once you buy extremely efficient servers, if you’re using conventional air-based cooling, at least meticulously implement hot and cold aisles. That will save half to three quarters of your cooling energy right off the bat, and it’s very cheap to do.
In 2003, the Rocky Mountain Institute’s datacentre design workshop developed recommendations to cut datacentre power needs by 89%. Are vendors implementing those recommendations?
Many of the recommendations are starting to emerge from vendors, and some are even better than we expected: chip efficiency, early liquid-cooling systems, convectively cooled blades, superefficient power supplies, and increasing experience with DC power supplies and DC [power distribution], which is common in Japan.
But we don’t see these things in datacentres yet?
We’re seeing piecemeal hardware solutions coming along nicely. What is so far missing, at least in US datacentres, is comprehensive integration of these concepts.
The next big step will be when one or more major operators puts all these parts together to realise the nine-fold or greater savings that we outlined. In fact, I now think we can do even better, because both the IT and the support equipment are proving to be more efficient than we thought possible.
You advocate using, of all things, slush to cool datacentres. Can you explain that?
We recently did a design for a high-tech facility in a temperate climate that was originally going to have over 20,000 tons of chillers, and by the time we got through, the number was zero.
We found we could meet about 70% of the load with the coolness or dryness of the outside air, using either air-side or water-side economisers depending on the time of year. The rest [came from] a mountain of slush sprayed out of snow-making machines into a hole in the ground on a few cold nights and used to provide 32-degree Fahrenheit (0-degree Celsius) meltwater all year.
Wouldn’t that take a lot of slush?
In this case, it was acres. But that’s to replace over 20,000 tons of chillers. You get over 100 units of cooling per unit of electricity this way, and you need only about 500 sub-freezing hours a year [to make the slush]. It’s a highly reliable technology [but] obviously not suitable for urban sites.
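To get a feel for the quantities involved, here is a rough sizing sketch. The design-day cooling load below is an assumed illustration (the interview gives only the chiller capacity displaced, not the load profile); the latent heat of fusion of ice and the ton-of-refrigeration definition are standard values:

```python
# Rough sizing sketch: how much slush a given cooling demand melts.
# The ton-hour figure is an assumed design-day load for illustration;
# the two constants are standard engineering values.

LATENT_HEAT_BTU_PER_LB = 144   # heat absorbed melting one lb of ice
TON_HOUR_BTU = 12_000          # one ton of refrigeration for one hour
LB_PER_SHORT_TON = 2000

ton_hours = 10_000             # assumed cooling demand on a design day
ice_lb = ton_hours * TON_HOUR_BTU / LATENT_HEAT_BTU_PER_LB
print(f"Slush melted on the design day: {ice_lb / LB_PER_SHORT_TON:.0f} tons")
```

Melting a ton of ice delivers about 24 ton-hours of cooling, so even a heavy design day consumes only hundreds of tons out of a multi-acre stockpile — which is why a few hundred sub-freezing hours of snow-making can carry the whole year.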
The most interesting way to do this would be to [generate] power onsite as well and run your cooling off the waste heat of the onsite power plant. And if that [power plant] is also a fuel cell, which is the most reliable power source we know, then you would get rid of your uninterruptible power supplies and all of the costs and losses.
In computer rooms, air-cooling system fans take up a significant amount of power. You have proposed redesigning fans based on designs found in nature, such as the Fibonacci spiral. Why are such designs more efficient and, if they are, why aren’t engineers designing them into products?
Because they have less friction. Friction is by definition waste. Engineers for millennia have been designing out friction, but in the last century or so, our engineering got specialised and stovepiped. The design process that used to optimise a whole system for multiple benefits got sliced into pieces, each with one specialist designing one component or optimising a component for a single benefit.
When the integration was lost — when the design process got disintegrated — we were less able to see how an integrated design could eliminate noticeable losses.
Moreover, our engineers started getting paid for what they spent rather than what they saved, because their fee is traditionally negotiated downwards from some percentage of the cost of what they design. So they have a perverse incentive to put in more equipment and energy and no reward for designing smarter.
What’s the bottom-line benefit to datacentres if all of these changes come to pass?
A super-efficient datacentre will have a lower capital cost because so much of the power and cooling equipment will go away. We’ll get back to being able to spend most of our capital budget on the IT equipment rather than on the stuff to make it run.
Where can readers learn more?
If people would like a general introduction to this way of thinking, they’ll find quite a few examples in a book called Natural Capitalism [Back Bay Books, 2000]. Both that and a Harvard Business Review summary of it are free at Natcap.org.