Data Center Efficiency: The IT Energy Efficiency Imperative

TriplePundit has teamed up with Microsoft to present a series based on a recent white paper entitled “The IT Energy Efficiency Imperative.” We have distilled key points from the white paper into this 10-part weekly series in order to:

  • Highlight the financial, productivity and environmental benefits of embracing IT energy efficiency.
  • Illustrate why increasing IT resource utilization from today’s low levels offers the most significant energy efficiency gains and why current efforts to improve utilization are failing.
  • Provide motivation and actionable guidance for IT and business decision makers to improve IT energy efficiency and resource utilization through technologies such as cloud computing.



When you think about greenhouse gas emissions, which do you think is responsible for more: airplanes, or computers and other IT? You might be surprised to learn that they are about the same, with each industry contributing roughly 2% of total global greenhouse gas emissions.

The average data center has some seriously dirty secrets:

  • Less than 3% of the energy used by data centers is productive
  • 15% of servers are entirely idle
  • For all their processing power, data centers average 15%–20% utilization, a figure that hasn’t changed in a decade
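The “less than 3%” figure is easier to believe once you work through the arithmetic. The back-of-envelope model below is our own illustration, not a calculation from the white paper: the PUE and idle-power values are assumptions, while the 15% utilization comes from the figures above.

```python
# Illustrative model of where data center energy goes. Assumed values:
# a facility PUE of 2.0 (only half the electricity reaches IT gear) and
# a server that draws 60% of its peak power even when completely idle.
PUE = 2.0             # total facility power / IT power (assumed)
utilization = 0.15    # average server utilization (from the article)
idle_fraction = 0.60  # idle power draw as a share of peak (assumed)

it_share = 1.0 / PUE                              # power reaching servers
load_proportional = utilization * (1.0 - idle_fraction)
productive_share = it_share * load_proportional   # energy doing real work
print(f"~{productive_share:.1%} of facility energy is productive")
```

With those assumed inputs the model lands right around 3%, which suggests the figure compounds three separate losses: low utilization, high idle power draw, and facility cooling/power overhead.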

Tight budgets, rising energy costs, and limits on electric power availability are hindering the ability of information technology (IT) departments to meet the growing demand for IT services. To mitigate these challenges, it is important for organizations to embrace IT energy efficiency principles to remain productive and competitive in the face of constrained and finite resources.

Technological Advances in Data Center Hardware

The good news is that computer hardware—along with data center power and cooling infrastructure—is becoming more energy efficient and continuing to offer gains in performance and capacity.

Software solutions that improve energy efficiency, including server virtualization and centralized power management, are more widely available. Cloud computing infrastructures, both public and private, are offering significant energy efficiency gains compared to traditional IT infrastructures.

But many IT departments are failing to capitalize on these advances.

Lacking the necessary incentives or capabilities to improve IT energy efficiency, they are continuing the decades-old practice of overbuilding computer systems and thereby missing the opportunity to make their IT operations more sustainable and more responsive to organizational needs.

There is an analogy here between computational systems and home heating equipment. Heating contractors have historically tended to oversize furnaces so that even on the very coldest day, the house can be quickly warmed to a toasty temperature. As a result, on most days the super-sized furnace runs only a small percentage of the time, and it has barely warmed up by the time the house is comfortable. As you might imagine, this is not a very efficient way to operate the system.

A smaller furnace, supplemented on those rare bitter cold nights with a little patience, a plug-in space heater and perhaps a cardigan will run far more efficiently on this and all other days. The same logic can also be applied to all but the most mission-critical computing applications, and new software tools and cloud-enabled capabilities such as server virtualization can readily facilitate the migration to a more efficient IT operation.

Poor IT Resource Utilization

Even with advances in data center technology, many IT departments struggle with growing IT power demands and high energy costs. A key reason, and a primary focus of the white paper, is poor IT resource utilization. Despite the widespread implementation of server virtualization technologies, about two-thirds of organizations report that less than half of their production environment has been virtualized.

Average server utilization is still at or below 15% to 20% — probably no better than it was a decade ago. Many organizations, including the U.S. government, report average server utilization rates of less than half that. This is a huge waste of resources, especially considering that 15% of servers in large organizations are completely idle but still drawing power.

Despite the wide availability of server virtualization and centralized PC power management solutions, only 25% of IT departments have a plan for optimizing IT resource use, increasing energy efficiency, and
minimizing the waste generated by their IT operations. As a result, average server utilization remains at historically low levels, and in many organizations, desktop PCs waste as much as 75% of the electricity they consume.
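The desktop figure is also easy to sanity-check. The sketch below is our own illustration, not a calculation from the white paper: it assumes a PC left running around the clock but actively used only 40 hours a week, so the exact waste in any office depends on usage patterns and sleep settings.

```python
# Illustrative check of the "~75% wasted" desktop figure: a PC left on
# 24/7 but actively used only during an assumed 40-hour working week
# spends most of its powered-on time sitting idle.
hours_per_week = 24 * 7     # 168 hours in a week
active_hours = 40           # assumed hours of actual use per week
idle_share = (hours_per_week - active_hours) / hours_per_week
print(f"idle share of powered-on time: {idle_share:.0%}")
```

This is exactly the waste that centralized power management targets: putting machines to sleep during those idle hours recovers most of that energy without touching the hardware.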

The Challenge for IT Departments

Although financial constraints have certainly played a role in slowing down virtualization efforts, particularly in the last couple of years, IT departments still struggle to reduce their energy usage. Two related factors appear to be at the root of this phenomenon:

  • IT departments do not control all of their organization’s IT assets.
    Most IT departments do not control many of the “mission-critical” applications within their organization. This lack of control results in underutilized hardware that is often ring-fenced and unavailable for use by other applications, thus severely limiting the potential for improving overall IT resource utilization.
  • Traditional application designs make it difficult to optimize IT resource use
    Most applications provide IT operators with little insight into their actual IT resource needs and requirements, which can vary significantly from moment to moment. The applications often rely on dedicated hardware for high availability, and when IT resources are constrained they simply slow down, sometimes with unexpected or unwelcome results.

As server performance and storage capacity increase, IT resource utilization and overall IT energy efficiency will continue to decline unless these factors are addressed. The underutilization of IT assets wastes money and takes a toll on the environment that extends far beyond the energy consumed by hardware. Computer manufacturing is a resource-intensive process: many of the materials and energy resources involved are damaging to the environment when extracted or processed, and some are in increasingly short supply.

By embracing energy efficiency in areas such as system architecture, hardware provisioning, software design, and operations, organizations can reduce their IT budgets and respond more rapidly to demands for additional services, thereby improving their overall productivity and competitiveness.

This is particularly important as fiscal constraints, coupled with the rising cost of energy needed to power IT systems, make it more difficult for organizations to add new IT capacity. Limited electric power supply, on the grid and within data centers, is also constraining the amount of IT equipment that many organizations can add to their data centers.

We’ll be exploring these issues in depth in the coming weeks. Follow along here.

6 responses

  1. I’m curious to know how much of an impact the weather has on data center efficiency, i.e., why would you locate one in Arizona, where it’s 120 degrees in the summer? Why are there not loads of data centers in Duluth? You could cool them with water from Lake Superior… or is proximity to your facility really that important?

    1. Dave, great question. Suffice it to say there are many factors that go into the decision to site a DC. These include locations where renewable resources or alternative energy choices are available, and where the climate allows use of air-side economization rather than chillers. Many other factors are also considered, like security, bandwidth availability, proximity to customers, fault lines, etc. So while Duluth may have a temperate climate, these and other factors will also play a role.

  2. It would be helpful to know what utilization means graphically, because peak demand and average utilization could be close together or far apart. As with our electric grid, effective and efficient design probably has to consider peak demand, not just average utilization.

    1. A study, “Growth in Data Center Electricity Use 2005 to 2010,” released this summer by Stanford University professor Jonathan Koomey, pegs growth in energy use among U.S. data centers at 36 percent from 2005 to 2010 – slower than some had predicted but nonetheless significant…
