On Tue, Aug 01, 2006 at 08:03:58PM -0700, Mark Schoonover wrote:
> get watts. I think a better way is to determine the max current the system
> will use. If you have a 500-watt PSU and assume 85% efficiency, then from
> the input voltage you can calculate the max input power the server is going
> to draw. I'd design the datacenter to support the max power level the server
> will need, not just what it takes to run the thing. Startup will draw the
> most power, approaching the max output of the PSU; it'll drop some once all
> the drives are up and spinning.
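To put numbers on that, here's a quick back-of-the-envelope sketch using the example figures above (500 W output, 85% efficiency) and assuming a 120 V input, which isn't stated in the original:

    # Rough worst-case input power from the PSU's DC output rating.
    # 500 W output and 85% efficiency are the example figures above;
    # 120 V input is an assumption.
    psu_output_watts = 500.0
    efficiency = 0.85
    input_voltage = 120.0

    max_input_watts = psu_output_watts / efficiency    # ~588 W from the wall
    max_input_amps = max_input_watts / input_voltage   # ~4.9 A per server

    print(f"max input power:   {max_input_watts:.0f} W")
    print(f"max input current: {max_input_amps:.1f} A at {input_voltage:.0f} V")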
Using this method, you will probably overbuild for power.
Power supplies in good servers have much higher capacity than the typical application actually needs.
Regarding drive spin-up, I think a reasonable number for spin-up of a 15k drive is 30 watts (I couldn't quickly find a spec for spin-up power, but ~10 watts idle is typical for Seagate 15k drives). Three of those per server is 90 watts. During spin-up the CPU(s) will be idle, so they'll draw much less power than at full load, which buys you a bit of headroom there.
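A quick sketch of that arithmetic, using the rough 30 W spin-up / 10 W idle guesses above (not vendor specs):

    # Extra power needed at spin-up vs. steady state, per server.
    # 30 W spin-up and 10 W idle per drive are the rough guesses above.
    drives_per_server = 3
    spinup_watts_per_drive = 30.0
    idle_watts_per_drive = 10.0

    spinup_total = drives_per_server * spinup_watts_per_drive   # 90 W
    steady_total = drives_per_server * idle_watts_per_drive     # 30 W
    print(f"spin-up: {spinup_total:.0f} W, steady: {steady_total:.0f} W, "
          f"extra headroom: {spinup_total - steady_total:.0f} W per server")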
Depending on your application, overprovisioning your power might not be a bad thing - next year's servers will probably use more.
It's somewhat of an art, but you can get reasonably close to the total power draw of your rack. In a pinch, I've added up all the UPSes and told the electrician 12000 VA. They'll know what to do with that number.
This is good advice.
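For illustration, adding up the UPS nameplates might look something like this; the individual VA ratings and the 0.9 power factor are made-up example numbers, not figures from the thread:

    # Sum the nameplate VA of the UPSes in the rack and convert to an
    # approximate wattage.  All numbers here are hypothetical examples.
    ups_va_ratings = [3000, 3000, 2200, 1500, 2300]
    total_va = sum(ups_va_ratings)                     # 12000 VA
    assumed_power_factor = 0.9
    approx_watts = total_va * assumed_power_factor     # ~10800 W
    print(f"{total_va} VA (~{approx_watts:.0f} W at PF {assumed_power_factor})")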
The "kill a watt" meters referenced in another post are a great suggestion too.
danno
--
dan pritts - systems administrator - internet2
734/352-4953 office    734/834-7224 mobile