
Hewlett Packard “Redstone” Server to Cut Energy Use 89%

Tuesday, November 1st, 2011

HP's Paul Santeler Reveals "Redstone"

According to Hewlett Packard, if no significant changes are made by 2015, an astonishing ten million servers will need to be employed to meet the world’s computing needs. Such infrastructure amounts to a huge and growing drain on the world’s energy grid – no less than 33 billion kWh per year.
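To put those figures in per-machine terms, here is a back-of-envelope sketch in Python; only the two totals above come from HP, and the per-server result is my own arithmetic:

    # Back-of-envelope check on the two figures above (the per-server
    # numbers are my own arithmetic, not HP's).
    servers = 10_000_000                    # projected servers needed by 2015
    annual_energy_kwh = 33_000_000_000      # 33 billion kWh per year
    kwh_per_server = annual_energy_kwh / servers      # ~3,300 kWh per year each
    average_watts = kwh_per_server * 1000 / 8760      # ~8,760 hours in a year
    print(f"~{kwh_per_server:,.0f} kWh per server per year")
    print(f"~{average_watts:.0f} W average draw per server")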

Right now, about 7,000 servers are brought online globally every day. That represents roughly 7,500 square feet devoted to data centers on a daily basis. Why? Massive growth in user applications, mobile in particular, as the back end, or “cloud,” slowly fills up with data and number-crunching power. To put things in perspective, more than 2 million videos are being watched online every minute as you read this. That’s serious, growing demand for data and data services.

Before today, I would have described a server to you as a large computer in a box with a whirring fan (that’s one reason I’m not a tech writer). This morning’s visit to HP has put a few things in perspective and given me some optimism about the role high technology can play in reducing the weight of its own expansion.

Paul Santeler, VP/GM of the Hyperscale Business Unit (the part of HP that handles massive orders of tens of thousands of servers), accepts that by 2015 this sheer volume will represent a crisis. If we don’t do something different, the energy and space consumed by 10 million servers will start to hit a wall. So the company has an audacious goal: to cut servers’ energy consumption to one tenth of current levels while radically shrinking their physical size.

Like all smart companies, HP sees opportunity in crisis. The first result? A new server called “Redstone” that fundamentally redesigns server architecture. By replacing current server CPUs with processors originally designed for mobile devices, where chips have been optimized to sip energy from very small batteries, the company and its partners have radically improved efficiency.

Secondly, something called a “federated architecture” has been employed. By tossing out the “large whirring boxes” with their own cooling and power systems, the federated design lets a rack of servers share most resources – cooling, power, networking, storage and so on – which means far more space, energy and cost saved.

Four tiny “servers,” each no larger than the CPU chip you probably have in your old PC, can sit on a small panel the size of two decks of cards. The panels can be loaded together by the hundreds into a box the size of a suitcase that would otherwise have taken a wall-sized rack.

And the ROI? A typical project illustrated by HP would cost a client $3.3 million to buy and operate over 400 traditional servers in 10 racks, plus oodles of other gadgetry. With Redstone running 4 times as many servers in merely half a rack, and the entire system drawing a tenth of the energy, the total price drops to $1.2 million. Not too shabby.
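The arithmetic behind that comparison is easy to check. A rough sketch in Python, using only the totals HP quoted (the per-server breakdown is my own, illustrative only):

    # Rough check of HP's illustrative comparison. The dollar totals, the
    # "4 times as many servers" and "a tenth of the energy" claims come from
    # the article; everything else is assumed for illustration.
    traditional_cost = 3_300_000        # USD, ~400 traditional servers in 10 racks
    traditional_servers = 400
    redstone_cost = 1_200_000           # USD, Redstone system in half a rack
    redstone_servers = traditional_servers * 4

    print(f"Cost per server (traditional): ${traditional_cost / traditional_servers:,.0f}")
    print(f"Cost per server (Redstone):    ${redstone_cost / redstone_servers:,.0f}")
    print(f"Total cost reduction: {1 - redstone_cost / traditional_cost:.0%}")   # ~64%
    print(f"Claimed energy reduction: {1 - 0.1:.0%}")                            # ~90%

That roughly 90% cut in energy draw is presumably where the headline’s 89% figure comes from.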

There are two schools of thought with regards to the building of our sustainable future – one that argues that technology will always advance to address any and all problems we face, the other that suggests only radical redefinitions of wealth, economy, and culture can provide for a viable future for humanity.  The truth is obviously some combination of the two.

Without getting into the physical consumption of resources and the e-waste that even new technology still entails, HP is to be applauded for using its top core competency to address a real-world crisis, and for doing so economically.


1 Comment

Categorized: Clean Technology

Mark Aggar (http://blogs.technet.com/markaggar):

    Or, we could satisfy much of the growth in the world’s future computing demands by using what we already have more effectively. Average server utilization is incredibly low (< 10%), and we need governance changes that encourage software developers and IT operators to make better use of these assets. The cloud helps in this regard, but software developers still need to do their bit.