
The Efficiency Opportunity Roadmap

We partnered with Microsoft to run key findings from a recent white paper entitled "The IT Energy Efficiency Imperative" as a ten-part series. To read the full series, click here. The white paper can be downloaded here in PDF format.

If we zoom out to the 50,000-foot view, we can clearly see that there are opportunities to improve the energy efficiency of computer servers at every level, micro to macro, from the silicon within a server’s internal processors to the buildings the servers are housed within.

Starting at the silicon level, certain components, such as “green” RAM and disk drives, can use less power at normal operational loads through lower voltage or other low-power designs (e.g., solid-state drives instead of hard disk drives). Additionally, certain components, such as the CPU and hard disk drives, can dynamically lower their power needs when less busy or idle, typically in conjunction with the operating system.

The operating system can employ some very sophisticated power management capabilities. By monitoring system operation, it can understand and respond to usage patterns, thereby allowing the hardware to reduce its energy use.
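To put rough numbers on the idea, here is a minimal sketch of the kind of savings OS power management can deliver. All wattage figures below are illustrative assumptions, not measurements from the white paper.

```python
# Hypothetical sketch: daily server energy with and without OS-driven
# power management of idle hardware. Power figures are assumptions.

ACTIVE_WATTS = 250.0        # assumed draw under load
IDLE_WATTS = 150.0          # assumed idle draw, no power management
MANAGED_IDLE_WATTS = 90.0   # assumed idle draw when the OS down-clocks hardware

def daily_energy_kwh(busy_hours: float, idle_watts: float) -> float:
    """Energy over one 24-hour day given hours of active load."""
    idle_hours = 24.0 - busy_hours
    return (busy_hours * ACTIVE_WATTS + idle_hours * idle_watts) / 1000.0

# A server busy 8 hours a day and idle the other 16:
unmanaged = daily_energy_kwh(8, IDLE_WATTS)
managed = daily_energy_kwh(8, MANAGED_IDLE_WATTS)
print(f"unmanaged: {unmanaged:.2f} kWh/day, managed: {managed:.2f} kWh/day")
```

Because servers spend so many hours at low utilization, even modest reductions in idle draw compound into meaningful daily savings.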

As we have shown previously, applications can help reduce energy consumption in a number of ways. If they are designed to work well with power management, by providing utilization information back to the OS and by being able to respond to variable system availability, they can ensure that servers and PCs save energy when idle and that user productivity is not affected by displays or systems powering off while critical tasks are running. Server applications that are designed to use IT resources dynamically and to tolerate sudden equipment failure can dramatically improve server utilization by reducing the number of servers or virtual machines assigned to a given application. Finally, applications should be able to suspend or postpone noncritical operations when resources (IT resources or electric power) are constrained.
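That last point can be sketched in a few lines. The example below is a hypothetical illustration: the `constrained` flag stands in for whatever real signal an application would consult (a demand-response event, battery state, or an overloaded cluster).

```python
# Hypothetical sketch: an application that postpones noncritical jobs
# while resources are constrained, then runs them once the constraint lifts.
from collections import deque

class JobRunner:
    def __init__(self):
        self.deferred = deque()   # postponed noncritical jobs
        self.completed = []       # jobs that have run

    def submit(self, job: str, critical: bool, constrained: bool) -> None:
        if constrained and not critical:
            self.deferred.append(job)   # postpone until resources free up
        else:
            self.completed.append(job)  # critical work always runs

    def drain(self) -> None:
        """Run postponed jobs once the constraint is lifted."""
        while self.deferred:
            self.completed.append(self.deferred.popleft())

runner = JobRunner()
runner.submit("serve-request", critical=True, constrained=True)
runner.submit("rebuild-index", critical=False, constrained=True)
print(runner.completed, list(runner.deferred))
runner.drain()  # constraint lifted; deferred work now runs
```

The design choice is simply that criticality is decided by the application, which knows its own workload, while the constraint signal comes from outside.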

Hardware subsystems (CPU/RAM/disk/network) can be configured, based on examination of an application’s performance characteristics, to ensure that they are neither “starved” due to bottlenecks in other subsystems nor overbuilt and mostly idle. This balancing activity can significantly reduce the system’s overall power draw and reduce costs while potentially improving performance. For systems such as servers that are on nearly 24/7, an efficient power supply can significantly reduce the amount of energy consumed and more than pay for itself in energy savings. For instance, a server that requires a near-constant 500 watts of DC power will consume about 1,100 fewer kWh per year using an 85% efficient power supply than one that is 70% efficient ($110/year savings at 10 cents per kWh). Most computer hardware components draw power whether they are used or not. Configuring hardware to contain only components that are needed (e.g., no sound cards in servers) can reduce an organization’s energy bill, particularly at scale (i.e., when you have many servers with the same configuration). Similarly, server costs (and associated environmental impact) can be reduced by eliminating unnecessary “cosmetic” plastics or even sheet metal.
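The power-supply figure above follows directly from the efficiency ratios; the short calculation below reproduces it.

```python
# Reproducing the article's power-supply arithmetic: a server drawing a
# near-constant 500 W of DC power, compared at 85% vs. 70% supply efficiency.

DC_WATTS = 500.0
HOURS_PER_YEAR = 24 * 365   # 8,760 hours
PRICE_PER_KWH = 0.10        # dollars, per the article's 10 cents/kWh

def annual_kwh(dc_watts: float, efficiency: float) -> float:
    """AC energy drawn from the wall over a year of continuous operation."""
    return dc_watts / efficiency * HOURS_PER_YEAR / 1000.0

saved_kwh = annual_kwh(DC_WATTS, 0.70) - annual_kwh(DC_WATTS, 0.85)
saved_dollars = saved_kwh * PRICE_PER_KWH
print(f"{saved_kwh:.0f} kWh saved per year, ${saved_dollars:.0f}/year")
# → roughly 1,100 kWh and $110 per year, matching the article
```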

Attention to detail in Server Management Infrastructure can also result in considerable energy savings. By using technologies such as virtualization and virtual machine migration, servers can be kept at higher rates of utilization, reducing the amount of hardware needed in the data center. Application frameworks designed to abstract individual operating system instances from the application can also help improve server utilization. Modern operating systems ship with power management enabled, but users can easily reduce this feature’s effectiveness by increasing timeouts or by disabling it altogether. By deploying a centralized power management solution, IT departments can ensure that power management is used appropriately and monitor its effectiveness. Furthermore, in addition to storage virtualization that allows storage hardware to be shared among many systems, storage management software can help significantly curb the growth of storage needs (and the associated energy consumption) through techniques such as data de-duplication, compression, and archiving.
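The consolidation payoff from virtualization is easy to quantify in back-of-the-envelope form. The utilization figures below are illustrative assumptions, not data from the white paper.

```python
# Hypothetical sketch of the consolidation math behind virtualization:
# how many physical hosts are needed once lightly loaded servers are
# repacked as virtual machines. Utilization figures are assumptions.
import math

def hosts_needed(n_servers: int, avg_util: float, target_util: float) -> int:
    """Hosts required to carry n_servers' aggregate load at target_util."""
    return math.ceil(n_servers * avg_util / target_util)

# 100 physical servers averaging 8% CPU, consolidated onto hosts run at 60%:
print(hosts_needed(100, 0.08, 0.60))
```

In this illustrative case, one hundred underused machines collapse to a small handful of well-utilized hosts, which is exactly the effect described above. A real capacity plan would also account for memory, I/O, and headroom for failover.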

Finally, a number of steps taken at the building level can also significantly improve server efficiency. By measuring the energy used by the IT equipment in the data center, IT departments can calculate the data center’s effectiveness at using power for its intended purpose (i.e., running servers) rather than for powering and cooling the data center itself (see our earlier discussion of PUE, CUE, and WUE). Metrics are also being developed to measure the ratio of water consumption and carbon emissions to the power consumed by the IT infrastructure. Finally, as server workloads are consolidated onto fewer, more energy-efficient servers, it is important to understand the power and thermal load limits of the data center’s cooling and power systems. Empty space in the data center is not only inefficient from a Power Usage Effectiveness (PUE) perspective but also a wasted asset.
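PUE itself is a simple ratio: total facility energy divided by the energy delivered to IT equipment, so an ideal data center scores 1.0. The sample figures below are illustrative, not from the white paper.

```python
# PUE (Power Usage Effectiveness): total facility energy divided by
# IT equipment energy. A perfect score is 1.0; sample figures are illustrative.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

# A facility consuming 1.8 GWh while its IT gear consumes 1.0 GWh:
print(pue(1_800_000, 1_000_000))
```

Here 0.8 GWh goes to cooling, power distribution losses, and lighting rather than to computing, so lowering PUE means shrinking that overhead relative to useful IT work.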
