
Datacenters Are Running Out Of Energy, Fast.

Tuesday December 16th, 2008

A recent report has revealed that data centers will face critical shortages of energy in less than three years. By 2011, two-thirds of all datacenters won’t have enough electricity to perform critical computing tasks.
The survey, carried out by Emerson Network Power, is based on interviews with datacenter professionals: 64% of the 167 respondents estimate that they will run out of data center capacity by 2011 because they will be facing energy shortages by then.
The situation in Europe is no better than in the US. IDC recently issued a report with similar findings, indicating that European data centers will also face an energy crisis in the near future, mainly because energy consumption there increased by more than 13% between 2006 and 2007. The report predicted that energy consumption by data center facilities would exceed 42 terawatt hours in 2008.


Nathaniel Martinez, program director for European Enterprise Servers at IDC, was quoted as saying that every euro spent on new server capacity in 2012 will likely require an additional 80 cents to power the supporting infrastructure.
Datacenter logistics present this decade’s most important, and inevitable, catch-22: the more servers are running, the more power is consumed, which generates more heat that needs cooling, which in turn consumes more power, and so on.
Datacenter officials are beginning to point out that the party will end if nothing changes: companies will have to rethink their energy strategies and make power conservation a priority.
If you think this sounds a tad urgent, think some more: many of Emerson’s respondents say the problems they face are already acute because they have reached the limit of their data capacity. Why? Energy.
Other respondents, however, report they’re coping with the situation as best they can. Some 25% of the participants say their company has plans in place to cut energy usage. Yet that percentage is not nearly enough to significantly mitigate the risk: the number of servers most businesses need is growing at a much faster pace.
A major additional factor is that power density per rack is increasing too. “Because of the increased deployment of blade servers, switches, and other powerful rack-based equipment, the overall power density of the racks that housed them also increased. The room average per rack for respondents was 8kW, which is up from 6kW reported in a spring 2006 [..] survey,” according to the Emerson report.
The way most companies address their datacenter capacity challenge is to set aside funds for capital investments in new infrastructure. A total of 47% of the respondents said they are planning innovations or expansions in the coming years, 38% said they’re building entirely new datacenters, and another 30% said they will consolidate existing datacenter operations and premises.
The imperative for datacenters to reduce energy usage is compelling: global electricity prices skyrocketed between 2002 and 2006, with hikes of some 56 percent, and during those years it also became clear that insufficient power would soon hurt datacenter performance.
The Emerson report shows that the reason so few datacenters are planning cuts in power consumption is that they prioritize high availability of their services.
HP recently announced new technology to reduce datacenter energy usage. Called HP Dynamic Power Capping, it caps the power servers can draw, letting customers reallocate energy and cooling resources that would otherwise sit over-provisioned. Dynamic Power Capping has been added to the HP Thermal Logic portfolio, according to a report on Environmental Leader.
This is how the technology would benefit a 1-megawatt data center: capital expenditures are reduced by as much as $16 million, and energy savings of approximately 25 percent translate into nearly $300,000 of savings every year.
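As a rough consistency check on those figures (a sketch only: the electricity rate of about $0.14/kWh and the assumption of round-the-clock operation are mine, not numbers from the HP or Emerson material), the annual savings work out roughly as follows:

```python
# Back-of-envelope check of the 1-megawatt example above.
# The electricity rate is an assumed, illustrative value, not from the report.
facility_load_kw = 1_000      # 1-megawatt data center
hours_per_year = 8_760        # continuous operation
savings_fraction = 0.25       # ~25% energy savings claimed for power capping
electricity_rate = 0.14       # assumed $/kWh (illustrative)

annual_kwh = facility_load_kw * hours_per_year
saved_kwh = annual_kwh * savings_fraction
saved_dollars = saved_kwh * electricity_rate

print(f"Energy saved per year: {saved_kwh:,.0f} kWh")
print(f"Approximate savings:   ${saved_dollars:,.0f} per year")
# At ~$0.14/kWh this lands near the ~$300,000/year figure cited above.
```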
If you’re interested in energy-saving tips for datacenters, check out Energy Logic, Emerson Network Power’s 2007 road map for a prioritized approach to conserving energy. Emerson says that saving one watt at the processor level can conserve 2.84 watts in total energy usage.
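To see why a single watt at the processor multiplies like that, here is a sketch of the cascade effect: a watt saved at the chip also avoids losses in every conversion and cooling stage upstream of it. The loss factors below are illustrative assumptions of my own, not Emerson’s published component breakdown, chosen only to show how the multiplier builds up:

```python
# Cascade-effect sketch: one watt saved at the processor avoids losses in
# every upstream stage. The efficiency/overhead figures are illustrative
# assumptions, not Emerson's exact Energy Logic numbers.
stages = [
    ("DC-DC conversion",     lambda w: w / 0.85),  # server voltage regulators
    ("AC-DC power supply",   lambda w: w / 0.90),  # server PSU
    ("Power distribution",   lambda w: w / 0.98),  # PDU losses
    ("UPS",                  lambda w: w / 0.88),  # double-conversion UPS
    ("Cooling",              lambda w: w * 1.80),  # ~0.8 W of cooling per W of IT load
    ("Building transformer", lambda w: w / 0.98),
]

watts = 1.0  # one watt saved at the processor
for name, apply_stage in stages:
    watts = apply_stage(watts)
    print(f"{name:22s} cumulative savings: {watts:.2f} W")

# With these assumed factors the cascade lands around 2.8 W, in the same
# ballpark as the 2.84 W figure Emerson cites.
```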


Comments

  • Thomas Zimmerman (http://www.caverntechnologies.com)

    With power availability becoming more of an issue, site selection based on power will be a major factor. Data center locations with high power availability and energy efficiencies will be very attractive for hosting computer systems.

    Many companies are taking advantage of underground data centers to save on cooling costs. Since cooling servers accounts for 40%-50% of all data center energy usage, cutting that by 30% is a huge savings. Sun Microsystems and Marriott are two companies that have recently moved their data centers underground.

    My prediction is that companies will start looking at and moving data center services to areas with low-cost power and locations with an energy savings advantage.

  • Prof. J.R. Thome

    Even illogical technology sometimes has “logical” solutions. Using 5°C air to cool 70-80°C chips was only logical if energy consumption and environmental consequences were ignored. Now that they are important, sequestering computers underground is similar to the illogical idea of hiding CO2 underground for future generations to deal with. The best solution is to go directly to cooling the chips with water or an environmentally friendly, dielectric refrigerant (allowing it to partially evaporate) to take away the heat at a fluid temperature of 50-60°C that can be cooled directly by outside air, ground water, etc. Both approaches can dissipate the heat directly into the environment without need of a refrigeration system, or better yet the heat can be recovered for other uses (building utilities, district heating, etc.). Such an operation primarily consumes only pumping power, much less than that of a compressor in a refrigeration system. This is what GREEN COMPUTING means in general terms, since the energy consumption (and CO2 footprint) per computation is reduced. INVISIBLE COMPUTING is a short-term underground, off-shore or remote-location solution to the illogical use of air to cool computer chips. Nowadays, the green technology roadmap should not be based on “thin air”.