
A Vision of the Future for IT Efficiency

By Microsoft Data

We partnered with Microsoft to run key findings from a recent white paper entitled "The IT Energy Efficiency Imperative" as a ten-part series. To read the full series, click here. The white paper can be downloaded here in PDF format.

Although energy efficiency is important, it doesn’t pay if it significantly reduces productivity, performance, and reliability. Some important systems are underutilized by design. For example, no one would seriously suggest sharing fire trucks among airports many miles apart to increase their utilization.

Understandably, IT pros are highly cautious and will advise against operational practices that introduce real or perceived risk, even though the vast majority of applications would cause no significant harm if they were oversubscribed or unavailable for short periods of time.

Unfortunately, most applications are provisioned as if an outage would be catastrophic. To avoid this kind of wasteful overprovisioning, it is important for application developers and business owners to work together with IT departments to define appropriate levels of performance, resiliency, and recovery for each of their applications.

Furthermore, if applications are designed with energy and resource efficiency as key criteria, reliability is likely to be better than under traditional IT resource provisioning practices, which are prone to uncertainty and error. Applications that can dynamically report their performance and adjust to constraints should be much easier to keep in compliance with their service level agreements (SLAs) than applications whose performance and availability requirements are expressed only on paper. Applications designed with SLA management in mind can also help eliminate the redundant and expensive layers of resiliency that often plague traditional high-availability designs.
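To make the idea of an application that "dynamically reports its performance" concrete, here is a minimal sketch of an in-process SLA monitor. The class name, the `target_p95_ms` threshold, and the nearest-rank percentile method are all illustrative assumptions, not anything prescribed by the white paper; a production system would load SLA terms from a service contract and export metrics to a monitoring pipeline.

```python
from dataclasses import dataclass, field


@dataclass
class SlaMonitor:
    """Tracks request latencies and checks them against a machine-readable
    SLA term, so compliance can be observed rather than assumed.

    `target_p95_ms` is a hypothetical SLA threshold chosen for illustration.
    """
    target_p95_ms: float
    samples: list = field(default_factory=list)

    def record(self, latency_ms: float) -> None:
        """Record one observed request latency in milliseconds."""
        self.samples.append(latency_ms)

    def p95(self) -> float:
        """95th-percentile latency using the simple nearest-rank method."""
        ordered = sorted(self.samples)
        idx = max(0, round(0.95 * len(ordered)) - 1)
        return ordered[idx]

    def in_compliance(self) -> bool:
        """True while observed tail latency stays within the SLA target."""
        return self.p95() <= self.target_p95_ms
```

An application publishing this kind of signal can be throttled or rescheduled automatically when it drifts toward an SLA breach, instead of being permanently overprovisioned "just in case."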

IT decision makers in the future will be better able to meet the needs of their organization and respond to demands for new IT services more quickly and cost effectively if “energy efficiency by design” becomes a fundamental IT tenet.

Imagine this future scenario: IT departments in large organizations have achieved significant operational flexibility and energy efficiency by running IT as a utility, using public cloud computing platforms for line-of-business applications and commodity services such as email, as well as their own private clouds for applications that must remain on premises for compliance or technical reasons. Smaller organizations mostly use applications run on a public cloud, having retired all of their servers except those running legacy applications that must run on premises.

The vast majority of new applications are developed and deployed directly on public and private clouds. Most legacy applications that were not designed with the cloud in mind have been migrated to private or public clouds as virtual machines.

Departments outside of IT rent cloud computation-and-storage capacity through the IT department and are not allowed to buy servers to be housed in the data center. The IT department works with its clients to determine where each application should reside (private or public cloud) based on technical and regulatory constraints, and it passes the resulting operating costs through to the application owner. Owners of applications deployed in a private cloud provide periodic usage forecasts to the IT department to help determine the quantity of IT hardware needed to adequately support demand.

Trend reporting and instrumentation in new applications make this forecasting easier. Because application owners pay for computing resources on a usage basis, they have an incentive to ensure that their applications can be dynamically scaled based on demand and to implement throttling mechanisms for applications with unpredictable demand. Many applications provide mechanisms to postpone noncritical work, freeing additional "virtual" capacity for critical application services when IT resources or power are constrained. Applications designed specifically for the private cloud can extend into the public cloud (a process known as cloud bursting) if additional capacity is required.
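The postponement mechanism described above can be sketched as a small admission scheduler: critical work is always admitted, while noncritical work is deferred whenever capacity is constrained and drained once capacity frees up. The class, the abstract "capacity units," and the oversubscription rule for critical work are all assumptions made for illustration, not a prescribed design.

```python
from collections import deque


class DeferringScheduler:
    """Admits critical work immediately; postpones noncritical work while
    capacity is constrained, creating 'virtual' headroom for critical
    services. Capacity units are an illustrative abstraction.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.in_use = 0
        self.deferred = deque()   # queued (name, cost) noncritical tasks
        self.completed = []       # names of tasks admitted so far

    def submit(self, name: str, cost: int, critical: bool) -> str:
        if self.in_use + cost <= self.capacity:
            self.in_use += cost
            self.completed.append(name)
            return "run"
        if critical:
            # Assumed policy: critical work may briefly oversubscribe
            # rather than wait behind noncritical load.
            self.in_use += cost
            self.completed.append(name)
            return "run"
        self.deferred.append((name, cost))
        return "deferred"

    def release(self, cost: int) -> None:
        """Free capacity, then drain deferred work that now fits."""
        self.in_use = max(0, self.in_use - cost)
        while self.deferred and self.in_use + self.deferred[0][1] <= self.capacity:
            name, task_cost = self.deferred.popleft()
            self.in_use += task_cost
            self.completed.append(name)
```

Under usage-based pricing, this kind of policy is what lets an application ride out a power or resource shortfall without extra hardware: the noncritical batch work simply runs later.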

Many of the new applications are also designed to be resilient enough to survive server and data center failures without the need for expensive clustering or other failover technologies. As a result, the overall utilization of powered-on servers dramatically increases. Because the IT department buys, owns, and controls all of the IT hardware within the organization, it determines when the hardware will be deployed, powered on and off, refreshed, or decommissioned to maximize energy efficiency and productivity. Server configurations are right-sized and balanced to optimize utilization by the application portfolio. With the aid of software and hardware power management technologies, computers continue to improve their energy use by more closely scaling it with IT utilization. Excess capacity, particularly when initially deployed, can be temporarily turned off until it is needed, but it is ideally made available on a spot market for computational cycles.
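The "right-sizing" decision above amounts to computing the minimum number of servers that must stay powered on for a demand forecast; everything beyond that count is excess capacity that can be turned off or offered on a spot market. This sketch uses a hypothetical safety-headroom factor; real capacity planning would also weigh failure domains, SLA terms, and startup latency of powered-off machines.

```python
import math


def servers_to_power_on(forecast_load: float,
                        per_server_capacity: float,
                        headroom: float = 0.2) -> int:
    """Minimum servers to keep powered on for a given demand forecast.

    `headroom` is an illustrative 20% safety margin, not a recommended
    value. Load and capacity share one arbitrary unit (e.g. requests/s).
    """
    needed = forecast_load * (1.0 + headroom)
    return max(1, math.ceil(needed / per_server_capacity))


def excess_servers(total_servers: int, forecast_load: float,
                   per_server_capacity: float) -> int:
    """Servers that can be powered off (or offered on a spot market)."""
    powered = servers_to_power_on(forecast_load, per_server_capacity)
    return max(0, total_servers - powered)
```

Because powered-on machines carry the load that idle ones would otherwise dilute, this is exactly how overall utilization of the running fleet rises even as total energy use falls.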

The data centers themselves are constructed with energy-efficient components and a minimal environmental footprint. They require little protection from the elements because they are built from modular, weatherized components, which dramatically reduce both materials (such as concrete and steel) and construction costs. Where possible, waste heat from data centers is captured to preheat water for commercial or residential use.

The organization’s client computing infrastructure is similarly energy efficient. The power settings of desktop PCs are centrally managed; the PCs automatically sleep when idle but can be woken (even remotely) by the end user or system administrator. IT pros ensure that applications running on mobile and desktop PCs are energy smart and don’t keep the devices awake when they are not in use.
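Remotely waking a sleeping PC is typically done with the standard Wake-on-LAN "magic packet": six 0xFF bytes followed by the target's MAC address repeated 16 times, sent as a UDP broadcast. The sketch below separates packet construction from sending; the MAC address and port shown are placeholders, and the target machine's firmware and NIC must have Wake-on-LAN enabled for this to work.

```python
import socket


def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC x 16."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be exactly 6 octets")
    return b"\xff" * 6 + mac_bytes * 16


def wake_on_lan(mac: str, broadcast: str = "255.255.255.255",
                port: int = 9) -> None:
    """Broadcast the magic packet on the local network.

    Port 9 (discard) is a conventional choice; port 7 is also common.
    """
    packet = magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))
```

A management tool combining centrally enforced sleep timers with this kind of wake mechanism lets PCs spend idle hours asleep without ever being out of the administrator's reach.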

Users who temporarily need additional computers can “check out” virtual machines running on servers to avoid buying an additional PC that will likely be underutilized over the long term. Corporate IT hardware purchasing policies require that all hardware—servers, clients, displays, storage, networking, and peripherals—meets strict energy efficiency and IT resource consumption criteria and is designed with the environment in mind. This includes a strategy for effective and responsible reuse and recycling of unneeded equipment.

IT departments that operate in this way can be dramatically more responsive to their organization’s needs and significantly reduce the amount spent on IT across the organization. If implemented at scale across the countless IT organizations worldwide, these actions would enable IT use to grow considerably while conserving an enormous amount of energy, raw materials, and water and significantly reducing IT-associated carbon emissions and other pollution. These practices could lead to a doubling or tripling of the average server utilization rate and could cut data center energy use by more than half—essentially flattening the current growth rate.

We'll be exploring these issues in depth in the coming weeks. Follow along here.