Posts Tagged ‘energy consumption’

12th June
2009
written by simplelight

One of the major issues with large data centers is power. This applies both to hyperscale operators like Microsoft and Google and to large enterprise data centers, which are typically very energy inefficient.

Definition of Power Effectiveness: Data Center Power Usage Effectiveness (PUE) is defined as the ratio of total data center power to IT (server) power draw. Thus a PUE of 2.0 means that the data center must draw 2 Watts for every 1 Watt of power consumed by IT (server) equipment. The ideal number would be 1.0, which means zero overhead. The overhead power is consumed by lighting, power delivery, UPS, chillers, fans, air conditioning, etc. Google claims to have achieved a PUE of 1.3 to 1.7. Microsoft runs somewhere close to 1.8. Most of Corporate America runs between 2.0 and 2.5.
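
As a quick illustration of the definition, here is a minimal sketch in Python; the 22.5 MW and 15 MW meter readings are made-up numbers for illustration, not figures from Google or Microsoft:

```python
# Minimal sketch: computing PUE from hypothetical meter readings.

def pue(total_facility_watts: float, it_watts: float) -> float:
    """Power Usage Effectiveness = total facility power / IT power."""
    return total_facility_watts / it_watts

total_draw = 22.5e6   # Watts drawn by the whole facility (assumed)
it_draw = 15.0e6      # Watts consumed by servers and other IT gear (assumed)

print(f"PUE = {pue(total_draw, it_draw):.2f}")            # PUE = 1.50
overhead = pue(total_draw, it_draw) - 1.0
print(f"Overhead: {overhead:.2f} W per Watt of IT load")  # 0.50
```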

A typical large data center these days costs in the range of $150 million to $300 million, depending on size and location. A 15 MW facility runs approximately $200 million. This is a capital cost, so it is depreciated over time.
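
For a rough sense of what that capital cost means per month, here is a back-of-the-envelope sketch; the 15-year straight-line depreciation period is an assumption for illustration, not a figure from the post:

```python
# Sketch: straight-line monthly depreciation of a $200M facility.

capex = 200e6        # facility capital cost in dollars (from the post)
life_years = 15      # assumed straight-line depreciation period

monthly_depreciation = capex / (life_years * 12)
print(f"${monthly_depreciation:,.0f} per month")  # ~$1,111,111 per month
```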

Most of the facility cost is power related: anywhere from 75% to 80% of it goes to power infrastructure (PDUs, chillers, UPS, etc.).

A typical 15 MW datacenter with 50,000 servers costs about $6.0 million per month in operating expense (excluding people costs). The share attributable to power infrastructure (PDUs, chillers, UPS, etc.) is between 20% and 24%, and actual power for the servers is another 18% to 20%, so the total power cost is between 38% and 44%. These numbers reflect what Microsoft or Google would achieve; the EPA has done a study and believes these numbers are closer to 50% for inefficient data centers.
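
Plugging the post's $6.0 million monthly figure and the midpoints of the quoted ranges into a quick check:

```python
# Back-of-the-envelope check of the percentages above, using the
# post's $6.0M/month figure and the midpoints of the quoted ranges.

monthly_opex = 6.0e6                 # total opex, excluding people

infra_share  = (0.20 + 0.24) / 2     # power infrastructure (PDU, chiller, UPS)
server_share = (0.18 + 0.20) / 2     # electricity actually consumed by servers

print(f"Power infrastructure: ${monthly_opex * infra_share:,.0f}/month")
print(f"Server electricity:   ${monthly_opex * server_share:,.0f}/month")
print(f"Total power share:    {infra_share + server_share:.0%}")  # ~41%
```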

24th June
2008
written by simplelight

I was at a conference this morning where Spansion and Virident were presenting their latest flash memory technology designed to replace DRAM in web servers. Some interesting facts:

  1. Cooling and power distribution losses account for 50% of the electricity consumed in US datacenters.
  2. Datacenter power use doubled from 2000 to 2005 and will almost double again by 2010. Growth in electricity use has been slowed somewhat by the advent of server virtualization over the last few years.
  3. US datacenters use more electricity than entire countries such as Sweden and Iran.
  4. Datacenters use almost 100 billion kilowatt-hours each year at approximately $0.10 per kilowatt-hour, and datacenter electricity consumption is growing at 15% per year (!); see the quick arithmetic after this list.
  5. Datacenter memory (DRAM) alone uses twice as much electricity as US solar panel installations produce in total.
  6. The US, EU, and Japan use three-quarters of the world’s electricity.
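
A quick sanity check on the arithmetic in fact 4, using only the figures quoted above:

```python
# Quick arithmetic on fact 4: annual spend and projected growth.

kwh_per_year = 100e9       # ~100 billion kWh consumed by US datacenters
price_per_kwh = 0.10       # dollars per kWh
growth = 0.15              # 15% annual growth in consumption

annual_cost = kwh_per_year * price_per_kwh
print(f"Annual electricity bill: ${annual_cost/1e9:.0f} billion")  # $10 billion

# Compound the 15% growth rate forward five years
for year in range(1, 6):
    annual_cost *= 1 + growth
    print(f"Year {year}: ${annual_cost/1e9:.1f} billion")
```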

It will be interesting to see whether Spansion’s newly announced EcoRAM can put a dent in these problems. They are citing some impressive numbers:

  1. 1/5th the power of DRAM at comparable read performance.
  2. 800x faster access times than NAND flash.
  3. 30 minutes to write 1 TB of data on EcoRAM vs. 5 hours using traditional NOR DIMMs (worked out as throughput below).
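
Converting the write-time claim in item 3 into sustained throughput (using decimal units, 1 TB = 10^12 bytes):

```python
# Converting fact 3 above into sustained write throughput.

tb = 1e12                 # bytes in 1 TB (decimal)

ecoram_secs = 30 * 60     # 30 minutes
nor_secs = 5 * 3600       # 5 hours

print(f"EcoRAM:   {tb / ecoram_secs / 1e9:.2f} GB/s")  # ~0.56 GB/s
print(f"NOR DIMM: {tb / nor_secs / 1e9:.3f} GB/s")     # ~0.056 GB/s
print(f"Speedup:  {nor_secs / ecoram_secs:.0f}x")      # 10x
```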

On the other hand, the representatives from Intel and AMD certainly weren’t giving their unqualified support to EcoRAM.