67 Comments

  • gjmck - Monday, December 19, 2005 - link

    I'm curious why the numbers don't reflect the true difference between equivalently configured Intel and Opteron systems.

    The Dempsey processor's max TDP is 130W and the Opteron's is 95W, a difference of only 35W. The memory controller needed by Dempsey should only consume 60-80W. Using 80W, that puts the maximum total difference between two equivalently configured systems at 80 + 35 = 115W.

    Yet in the max processor utilization tests the difference was 214 Watts. So where are the extra 99 Watts being used? FBD? If so, then when Opteron moves to similar memory technology the delta will not be as great.

    Gregg McKnight
  • Furen - Thursday, December 22, 2005 - link

    Intel's TDPs reflect "typical" power draw, while AMD's reflect worst-case power consumption, so they're not directly comparable. I very much doubt the memory controller uses anywhere close to 80W; I'd say something like 20-30W for the whole northbridge is reasonable. FB-DIMMs do use more power, but that shouldn't be more than 5-10W per DIMM. The rest is just the CPU being insanely power-inefficient.
  • dannybin1742 - Friday, December 16, 2005 - link

    To keep anything at a constant temperature, the heat going into the system must equal the heat being taken away. So if one system uses 200W of power, first you have the cost of the 200W, then you have the cost to remove the 200W of heat the system gives off. On top of this, air conditioners are 20-25% efficient at best (if I remember correctly), so removing the heat generated would take 3-4X more energy. So in essence you are looking at AT LEAST 2X the amount of money calculated in the article (I took a year of thermodynamics at school as an undergrad); in reality, you are probably looking at 4-6X to run the systems and remove their heat from the data center. They should have looked at the Opteron 2.2GHz HE (low voltage); I'd be interested to see what power numbers those put up.

    Also, was Windows Server 2003 64-bit, or were all the tests run in 32-bit? I just skimmed the article. How about Linux?
  • coldpower27 - Friday, December 16, 2005 - link


    Opteron 270 HE is the highest-clocked of the lower-wattage 2-way Opterons, and it runs at 2.0GHz.

  • Viditor - Friday, December 16, 2005 - link

    quote:

    Opteron 270 HE is the highest-clocked of the lower-wattage 2-way Opterons, and it runs at 2.0GHz

    You mean 2 way dual core...
    The 250 HE is single core at 2.4 GHz...
  • coldpower27 - Friday, December 16, 2005 - link

    Yes, I assume 2 Dual Core vs 2 Dual Core.
  • haris - Friday, December 16, 2005 - link

    One question that kept nagging me was: how many threads were required to get the systems to each load level? How much of a difference would it make to performance/watt once you take into account that processor 1 is also handling x% more/fewer threads than processor 2?
  • Jason Clark - Friday, December 16, 2005 - link

    That will teach me for just taking a $1,000 measurement device's reported figures :) It actually figures out the cost, which obviously was wrong. I've updated the numbers; they should be correct now.


    Again, sorry :)
  • coldpower27 - Friday, December 16, 2005 - link

    Thanks a lot. :)
  • Biffa - Friday, December 16, 2005 - link

    With over a 1GHz deficit (yes, I know) in processor speed, and with only 1MB of cache per core rather than 2MB, I think we can safely say that Intel is still clutching at straws at this level of the game.

    Good PR on their part (I've always admired them for that), but it's a crying shame that after all this time this is the best they can do.
  • OrphanBoy - Friday, December 16, 2005 - link

    OK, I haven't got one of these new chips, but the work machine I'm sitting at right now has 2x 3.6GHz Noconas with a 7800 GTX, and I often run it at full whack; the amount of power I must be drawing has to be huge!

    Nearly half a kilowatt per machine is a scary thought!
  • Cygni - Friday, December 16, 2005 - link

    Imagine a Pentium D EE on an Nvidia Intel SLI board with those quad 7800 GTs from over at Tom's. Maybe throw a nice RAID 5 array in too. That's gotta pull down some SERIOUS wattage! Can't wait until the day that turning my gaming rig on for the first time trips my breaker, haha.
  • Cygni - Friday, December 16, 2005 - link

    Of note, these aren't 64-bit results. I get the feeling that 64-bit Linux results would favor the Opty even more.

    The FB-DIMM controller and the use of multiple FSBs (FINALLY!!!) really boost the performance here to serious competition levels. This box would make a serious workstation powerhouse.

    But, as noted, the Opteron will likely have moved on quite a bit by the time this package is released. And it has better 64-bit performance. And its platform is already available. But it is certainly interesting to see Intel really dominate a performance benchmark for the first time in a long time.
  • Peter - Friday, December 16, 2005 - link

    >use of multiple FSBs (FINALLY!!!)

    ... and only two years after AMD abandoned the dual-FSB approach and went with HyperTransport.

    For those who forget quickly :) the Athlon MP chipset (AMD762 northbridge) used dual independent FSBs.
  • Viditor - Friday, December 16, 2005 - link

    quote:

    Of note, these aren't 64-bit results

    A very good point... given the memory used, it doesn't make sense not to test 64-bit as well.

    Jason, could you let us know why you used 32-bit instead of 64-bit?
  • Jason Clark - Friday, December 16, 2005 - link

    We're going to look at 64-bit in the new year, with SQL 2005. You have to realize that most of the general public running Windows 2003 Server are still running 32-bit; 64-bit is not as widely adopted as you may think.

    Cheers.
  • Viditor - Friday, December 16, 2005 - link

    quote:

    we're going to look at 64-bit in the new year, with SQL 2005

    Fair enough, and thanks for the first peek!
    Enjoy your Holidays, then get out there and find us some MORE cool stuff to learn! ;)

    Cheers,
    Charles
  • Cygni - Friday, December 16, 2005 - link

    It's likely what Intel let them run at the time, as I doubt Intel shipped AnandTech a working system for its own use. :D

    Also, driver support is probably in its infancy; it may not even have Linux or x64 support today. Probably only Intel knows. But I think we can assume that 64-bit performance will be similar to the current Xeons'.
  • Griswold - Friday, December 16, 2005 - link

    Opteron 280 = available now
    "Bensley/Dempsey" = ???

    Btw, what about the Opteron 285 SE at 2.6GHz that is used exclusively in the Sun Fire X4200? It should rectify the performance chart as well.
  • Heinz - Saturday, December 17, 2005 - link

    Yes, that issue caught my eye as well; it is a bit of an apples-to-oranges comparison. Systems available now are set up against systems available in Q2 or even H2 of next year.
    quote:

    Dempsey is going to take us well into Q2 of next year, and Woodcrest will appear sometime in the second half of next year. Woodcrest will be a lower wattage part that is focused on performance per Watt.


    Well... one should look at the AMD roadmap and what will be in AMD's portfolio by then. Then you can declare a "winner" for the 2006 server market, because only then is a real choice possible. Up to now there is no choice at all: everybody has to buy the AMD system, as the Intel one is simply not available.

    I know that you cannot test Socket F now, so you end up with the next best solution, which is an Opteron 280 system; but my point is that you should have at least mentioned Socket F. It can/may (still speculation) deliver performance increases for AMD in 2006, too.

    Without that, the pure bottom-line (performance) result of your article is that a 2006 system is better than a 2005 one. Not really great news... even if it is about an Intel system being faster than AMD :)

    So for more objective articles, please try to cover all the industry's points of view. Without that, bad boys might question AnandTech's independence, especially after you were invited on a nice(?) trip to Intel headquarters... no offense here, just trying to make a fair comment. After all, it was a nice overview of the next-gen Intel platform with a lot of information.

    byebye

    Heinz
  • Furen - Sunday, December 18, 2005 - link

    I think the comparison is fair enough considering that Bensley should be coming out within the next 3-5 months. The most I'd expect from AMD by then is maybe a bump in clock speed. Socket F is scheduled to come out in Q4 2006, if I remember correctly, and mentioning it would serve no purpose since we know absolutely nothing about it.
  • Heinz - Monday, December 19, 2005 - link

    OK, if your information were correct I would agree; however, Socket F is due in H1 2006, i.e. the same 3-5 month timeframe we have to wait for Bensley. Thus Bensley is not competing with the tested Socket 940 system but with a Socket F system.

    Look here:
    http://www.pcstats.com/NewsView.cfm?NewsID=46731

    Apart from that, I do not see your point about not being interested in Socket F at all. Of course there is little information about it, but what is sure is that it will be presented in 2006, and new product generations are normally faster/better than the old ones (an old tradition in the computer market, if not in any market ;-). I would be interested to know about it.
    It is like saying the new Volkswagen is next year's best car on the market because it is better than the current competition, without mentioning that there will also be a new Toyota model.

    If the information from PCStats is incorrect and AnandTech has better information about a delayed/later launch of the Socket F platform, I apologize. But then again... it should have been mentioned in the article :)

    byebye

    Heinz
  • Furen - Thursday, December 22, 2005 - link

    Interesting, that's the first time I've seen that roadmap. I was kind of hoping that Socket F would come out at the end of the year with FB-DIMM support, since I think tri/quad-core CPUs may be bandwidth-limited with two DDR2 channels; then again, the fact that they'll be DDR2-667 DIMMs may help enough.
  • IntelUser2000 - Friday, December 16, 2005 - link

    The first Intel dual-core CPU for dual-processor servers is already out; it's called Paxville. It uses the aging Lindenhurst chipset with a single 800MHz FSB, while Bensley will use dual 1066MHz FSBs. The highest clock for Paxville is 3.0GHz.

    http://www.realworldtech.com/page.cfm?ArticleID=RW...

    It says Bensley with Dempsey will be out in Q1 2006.

    http://www.theinquirer.net/?article=27789

    3.2 and 3.46GHz will be 1066FSB.
    2.5, 2.83 and 3GHz will be 667FSB.

    MV versions will be 3.2GHz with 1066FSB.
    LV versions will be 2GHz with 667FSB (ouch).
  • coldpower27 - Friday, December 16, 2005 - link


    Interesting, though with that projected clock frequency on the LV Dempsey, they might as well use the Sossaman processor, since the roadmap in this article no longer lists LV Dempsey parts.
  • Griswold - Friday, December 16, 2005 - link

    Mostly information I didn't ask for / that can be found in the AT article. Thank you for that.
  • Frallan - Friday, December 16, 2005 - link

    I don't mind that AMD has held the lead for quite some time, because they needed it badly. But it wasn't good that Intel had nothing that could compare. Now at least Intel is on the map alongside AMD again, which might actually force AMD to move a bit faster again.

    Then on the topic of power: whether it is 8k, 20k or 50k you save per year by buying product A instead of B isn't important if everything else is equal. 8k is more than enough to rule in favour of AMD (remember, everything else equal).

    The future looks interesting and I'm gonna pick up a Denmark soon :0)
  • menting - Friday, December 16, 2005 - link

    The problem is that even if performance is equal, there's no price parity on the cost of the servers... :)
    You get a huge discount if you go mostly Intel.
  • phaxmohdem - Friday, December 16, 2005 - link

    Wow! A dual-core server/workstation chip. Way to go Intel! AMD better get its $hit together soon.... Oh wait...

    Do they make a special silver stake or bullet that will ever kill fvcking Netburst?
  • coldpower27 - Friday, December 16, 2005 - link

    These are probably the last NetBurst-based server processors you will see for this segment; the next slated update is the Woodcrest core, part of Intel's NGMA, replacing the Dempsey core used now.
  • Furen - Friday, December 16, 2005 - link

    I must say that performance is very good on these (seriously). The cost may be a bit prohibitive (then again, decent servers are always expensive as hell) since the platform introduces FB-DIMMs (and needs 4 channels for this performance). Also, I would like to see someone test these with a 667MHz FSB just to see how much of a choke point it becomes, since every Dempsey besides the top-end 3.46GHz one will use it (I think).
  • Beenthere - Friday, December 16, 2005 - link

    Intel holds a dog-and-pony show for some hand-picked journalists it feels will be "Intel friendly" as a result of getting the "scoop" on Intel products as far as three years off, ahead of the mainstream PC media. Then Intel proceeds to provide prototype CPUs for testing months if not years before they will actually be available. What a manipulation of the media and public opinion.

    This is damage control in action, folks. Intel is desperate to save face and as many customers as it can while it hopelessly tries to deliver some competitive product in a year or two. The problem is AMD is so far ahead in technology that they can release better CPUs any time they desire, and Intel has nothing to counter AMD's superior products. Even the Intel fanboys and "media friendly journalists" have had to admit that purchasing an Intel product now or in the foreseeable future would be a very poor investment.

    The bad situation Intel is in couldn't happen to a nicer, more deserving company, IMNHO.
  • ElJefe - Friday, December 16, 2005 - link

    Bensley... that's a gay name.

    The chip does nicely though.

    I hope Opteron gets something nicer. I mean, just increase the speed of the chip by 0.2GHz and it will most likely blow this away and still not be power-hungry.

    hm.

    1st
  • Brian23 - Friday, December 16, 2005 - link

    The cost per month is wrong.

    Example:

    The Intel chip pulls 479W at 100% load.
    In 24 hours, that's 24 * 479 = 11,496Wh/day.
    Assume 31 days in a month: 11,496 * 31 = 356,376Wh/month = 356.4kWh/month.
    Assume 14 cents per kWh: 356.4 * 0.14 = $49.89 per month.
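
    A minimal Python sketch of that calculation (the 479W draw and $0.14/kWh rate are the figures above; the function name is purely illustrative):

        def monthly_cost(watts, dollars_per_kwh=0.14, days=31):
            """Electricity cost of a constant load measured at the wall."""
            kwh = watts * 24 * days / 1000  # Wh -> kWh
            return kwh * dollars_per_kwh

        print(round(monthly_cost(479), 2))  # 49.89 dollars/month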
  • coldpower27 - Friday, December 16, 2005 - link

    Actually, I think I agree with you.

    1 kWh = 3,600,000J

    Worst-case scenario:
    479W = 479J in 1 second, 1,724,400J in 1 hour, 41,385,600J in 1 day, 1,282,953,600J in 31 days.

    Divide by 3,600,000J/kWh
    = 356.376kWh

    Multiply by $0.14/kWh
    = $49.89 per month; the above poster is correct.
  • coldpower27 - Friday, December 16, 2005 - link


    The actual difference between running 40 Opteron systems and 40 Bensley systems for 1 year at 40-60% load comes to a difference of $5,890.40, roughly 1/10 the amount AnandTech reports.
  • coldpower27 - Friday, December 16, 2005 - link

    Disregard; I used the cost figures of 172 and 292 as wattages. Oops :P
  • coldpower27 - Friday, December 16, 2005 - link

    That was in regard to the $5,000-ish figure.
  • Furen - Friday, December 16, 2005 - link

    The difference is around $8,140 for 40-60% load (which is realistic) and around $10,500 ($23,500 compared to around $13,500) for full load.

    The problem, however, is that the systems' power consumption is not the only thing a data center deals with. The more power a system uses, the more heat it throws off, and energy consumption for cooling can match the systems' power consumption. Another thing to take into account is the AC-DC and DC-AC power conversion inefficiencies (this is before even hitting the system's power supply, which adds even more inefficiency), which will probably add another 20-30% to the real power consumption. So instead of a difference of $8,140 you end up with a difference of $19,536, and that's assuming you don't need to purchase any new equipment aside from the 40 servers themselves. Another VERY important thing is power density. You could conceivably throw 64 1U Opteron systems onto a single rack, with a ~17kW peak power draw, but 64 1U Bensley systems would require a peak power draw of ~31kW; not to mention that it's probably very stupid to stick 2 Dempseys into a 1U system (but hey, I'd say the same thing about sticking 4 dual-core Opterons into a 1U system, but people still do it).

    That is not to say that AnandTech's data is right, 'cause it isn't. I just wanted to point out that though measuring power consumption in itself is important, trying to draw conclusions from the power consumption BY ITSELF is not very useful, since it ignores all other related costs and limitations.
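
    A rough sketch of that rack-density math; the 64-systems-per-rack density is the assumption above, and the per-system full-load draws (479W for Bensley, roughly 265W for the Opteron, inferred from the 214W full-load delta quoted earlier in the thread) are illustrative figures, not measured values:

        def rack_peak_kw(watts_per_system, systems_per_rack=64):
            """Peak power draw, in kW, of a rack of identical 1U systems."""
            return watts_per_system * systems_per_rack / 1000

        print(rack_peak_kw(479))  # ~30.7 kW: the "~31kW" Bensley rack
        print(rack_peak_kw(265))  # ~17.0 kW: the "~17kW" Opteron rack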
  • coldpower27 - Friday, December 16, 2005 - link

    How does a difference of $8,140 increase to $19,536, which is an increase of over 130%?
  • Viditor - Friday, December 16, 2005 - link

    quote:

    How does a difference of $8,140 increase to $19,536, which is an increase of over 130%?


    1. Double the number (probably more, but for the sake of argument make it double) for the difference in air conditioning.
    2. The PSU draws power at about 75-80% efficiency on average, so increased power demand increases the loss from PSU inefficiency.

    I can't say that the AT number is right, but I can't say it's wrong either...
  • Furen - Friday, December 16, 2005 - link

    Like I said, cooling normally matches the system's power consumption, so the difference is actually 2x $8,140. To this you apply a 20% increase due to conversion inefficiency and you get $19,536. This is why everyone out there is complaining about power consumption, and this is why performance per watt is the wave of the future (this is what we all said circa 2001... when Transmeta was big) for data centers. I think Intel will have a slight advantage in this regard when it releases Sossaman (or whatever the hell that's called) unless AMD can push its Opteron a bit more (a 40W version would suffice, considering that it has many benefits over the somewhat-crippled Sossaman).
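
    A minimal sketch of that arithmetic, under the assumptions stated above (cooling matching the IT load 1:1, plus a 20% conversion loss):

        def effective_cost(it_cost, cooling_match=1.0, conversion_loss=0.20):
            """Scale a raw IT electricity cost by cooling and conversion overhead."""
            return it_cost * (1 + cooling_match) * (1 + conversion_loss)

        print(round(effective_cost(8140)))  # 19536, i.e. 8140 * 2 * 1.2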
  • Viditor - Friday, December 16, 2005 - link

    quote:

    I think Intel will have a slight advantage in this regard when it releases its Sossaman (or whatever he hell that's called) unless AMD can push its Opteron a bit more (a 40W version would suffice, considering that it has many benefits over the somewhat-crippled Sossaman).


    Sossaman being only 32-bit will be a fairly big disadvantage, but it might do well in some blade environments...
    The HE line of dual-core Opterons has a TDP of 55W, which means that their actual power draw is substantially less. 40W?... I don't know. If they are able to implement the new strained-silicon process when they go 65nm, then probably at least that low...
  • Furen - Friday, December 16, 2005 - link

    Yes, that's what I meant by "...considering it has many benefits over the somewhat-crippled Sossaman"... that, and the significantly inferior floating-point performance. I've never worked with an Opteron HE so I can't say how their power consumption is. The problem is not just the hardware itself, though, but also the fact that AMD does not PUSH the damn chip. It's hard to find a non-blade system with HEs, so AMD probably needs to drop the price a bit on these to get people to adopt them in regular rack servers.
  • Viditor - Friday, December 16, 2005 - link

    quote:

    I've never worked with an Opteron HE so I can't say how their power consumption is

    Me neither... and judging it by the TDP isn't really a good idea either (though it does give a ceiling). Another point we should probably look at is the FB-DIMMs... I've noticed that they get hot enough to require active cooling (at least on the systems I've seen). I know that Opteron is supposedly going to FB-DIMMs with the Socket F platform (though that's unconfirmed so far). This brings up 2 important questions...
    1. How well do the Dempseys do using standard DIMMs?
    2. How much of the Dempsey test system's power draw is for the FB-DIMMs?
  • Heinz - Saturday, December 17, 2005 - link

    quote:

    I know that Opteron is supposedly going to FB-DIMMs with the Socket F platform (though that's unconfirmed so far).


    Go to:

    http://www.amdcompare.com/techoutlook/

    There, FB-DIMM is mentioned for 2007 (platform overview).
    Thus, either the whole Socket F platform is pushed to 2007, or Socket F simply uses DDR2, or maybe both :)

    But I guess it will be DDR2. Historically AMD uses older, reliable RAM technologies at higher speeds, like PC133 SDRAM and DDR400... and now it is DDR2-667/800.

    Just wondering if AMD will then introduce another "Socket G" as early as 2007?

    byebye

    Heinz
  • Furen - Friday, December 16, 2005 - link

    Dempseys should perform as badly as current Paxvilles if they use normal DDR2. This is because I seriously doubt anyone (basically, Intel) is going to make a quad-channel DDR2 controller, since it requires lots and lots of traces and there's better technology out there (FB-DIMM IS better, whatever everyone says; it's just having its introduction quirks). Remember that Netbursts are insanely bandwidth-starved, so having quad-channel DDR2 (which is enough to basically saturate the two 1066MHz front-side buses) is extremely useful in a two-way system. Once you get to 4-way, however, the bandwidth starvation comes back again.

    Intel is being smart by sending out preview systems with the optimal configuration, and I'm somewhat disappointed that reviewers don't point out that this IS the best these chips can get. Normally people don't even mention that the quad-channel memory is what gives these systems their performance benefit, but let it be assumed that it's the 65nm chips that perform better than the 90nm parts (for some miraculous reason) under the same conditions. That's why I'm curious about how these perform on a 667MHz FSB. Having quad-channel DDR2-533 is certainly not useful at all with such a low FSB, I think. Remember how Intel didn't send Paxvilles out to reviewers? I'd guess that the 667FSB Dempseys will perform even worse than those.
  • IntelUser2000 - Friday, December 16, 2005 - link

    quote:

    Dempseys should perform as badly as current Paxvilles if they use normal DDR2. This is because I seriously doubt anyone (basically, Intel) is going to make a quad-channel DDR2 controller, since it requires lots and lots of traces and there's better technology out there (FB-DIMM IS better, whatever everyone says; it's just having its introduction quirks). Remember that Netbursts are insanely bandwidth-starved, so having quad-channel DDR2 (which is enough to basically saturate the two 1066MHz front-side buses) is extremely useful in a two-way system. Once you get to 4-way, however, the bandwidth starvation comes back again


    What do you mean?? Bensley uses 4-channel FB-DIMM, if you didn't know. And people who research this stuff / have looked into the technical details say Intel's memory controller design rivals the best, if not betters them.

    Also, Bensley uses DDR2-533, while Lindenhurst uses DDR2-400. We all know that DDR2-533 is faster than DDR2-400.

    quote:

    That's why I'm curious about how these perform on a 667MHz FSB. Having quad-channel DDR2-533 is certainly not useful at all with such a low FSB, I think


    Yes, but that's certainly better than dual-channel DDR2-400 with an 800FSB, since Bensley will have two FSBs anyway.

    Paxville and Dempsey have the same amount of cache; the only difference is that Dempsey is clocked higher.
  • Furen - Friday, December 16, 2005 - link

    If you read what you quoted again, I said that Dempseys should perform the same as current Paxvilles if they use NORMAL DDR2 (as in, not FB-DIMMs). I know there are 4 FB-DIMM channels on Bensley, but Viditor wondered how Dempseys would perform on regular DDR2. Yes, Intel's memory controllers are among the best (nVidia's DDR2 memory controller is slightly better, I think), but I said that I don't believe they will make a quad-channel DDR2 northbridge, since the number of traces coming out of a single chip would be insane. You're correct about Lindenhurst sucking.

    Consider having two FB533 channels: the two CPUs (4 cores) would share the equivalent of a 1066FSB in memory bandwidth, and having an insanely wide FSB doesn't help if you don't have any use for it; memory bandwidth limitations always hurt Netbursts significantly. The same could be said for having 4 FB channels and running at dual FSB667. I don't even want to think about Intel's quad-core stuff, though the P6 architecture is much less reliant on memory bandwidth than Netburst.
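
    A back-of-the-envelope check on that equivalence, assuming the usual 64-bit (8-byte) data paths for both the FSB and the DDR2 DIMMs behind each FB-DIMM channel:

        def bus_gb_per_s(mega_transfers, bytes_wide=8):
            """Peak bandwidth of a bus in GB/s: transfer rate x width."""
            return mega_transfers * bytes_wide / 1000

        print(bus_gb_per_s(1066))     # ~8.5 GB/s: one 1066 FSB
        print(2 * bus_gb_per_s(533))  # ~8.5 GB/s: two DDR2-533 channels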
  • Viditor - Friday, December 16, 2005 - link

    Some good points, Furen... it seems to me that with Dempsey on Bensley, Intel will finally be competitive in the dual dual-core market. I agree that quad dual-cores will still be a problem for them...
    I think that in the dual dual-core sector, though, while I doubt Intel will regain any marketshare, they may at least slow down the bleeding.
  • coldpower27 - Friday, December 16, 2005 - link

    And I am interested in how you got a difference of $8,140 to begin with.
  • Furen - Friday, December 16, 2005 - link

    I took a lot of shortcuts just to get a rough approximation, but here it goes:

    406W - 260W = 166W
    166W * 24h/day = 3,984Wh/day
    3,984Wh/day * 365 days = 1,454,160Wh/year = 1,454kWh/year
    1,454kWh/year * $0.14/kWh (which is overpriced, by the way, since consumers normally pay more than businesses) = $203.58/year

    $203.58/year * 40 systems = $8,143.20/year for 40 systems.
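
    The same shortcut math as a Python sketch (the 166W delta, 40 systems, and $0.14/kWh are the figures above):

        def yearly_delta_cost(delta_watts, systems=40, dollars_per_kwh=0.14):
            """Yearly electricity-cost difference for a fleet of systems."""
            kwh_per_system = delta_watts * 24 * 365 / 1000
            return kwh_per_system * dollars_per_kwh * systems

        print(round(yearly_delta_cost(166)))  # ~8143 dollars/year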
  • Viditor - Friday, December 16, 2005 - link

    Good point...forgot about the power conversion and power supply loss.
  • Furen - Friday, December 16, 2005 - link

    I didn't mean PSU power loss, but rather that many data centers convert the AC input to DC at the distribution centers and then convert it to AC again just before sending it to the servers, since servers are not built to run on DC.

    PSU loss is already reflected in AnandTech's measurements, since power consumption is measured at the plug.
  • Furen - Friday, December 16, 2005 - link

    man, I would kill for an edit function on comments...
  • Viditor - Friday, December 16, 2005 - link

    quote:

    The actual difference between running 40 Opteron systems and 40 Bensley systems for 1 year at 40-60% load comes to a difference of $5,890.40, roughly 1/10 the amount AnandTech reports

    You're forgetting the cost of cooling (which is much higher than just the CPU...)
  • coldpower27 - Friday, December 16, 2005 - link


    They are measuring the total power draw of the 2 systems, which includes the energy used by the cooling system. I am not forgetting anything; I am only interested in the cost of the electricity used by the 2 systems.

    AnandTech isn't incorporating the cost of cooling into its numbers either.
  • Jason Clark - Friday, December 16, 2005 - link

    If we had complete systems from both vendors, this would have been possible. Unfortunately, we had a pre-production validation platform, and a motherboard and two CPUs from AMD :)... So what we did was make the Bensley system as close to the open-air Opteron system as we could. I agree, we need to get some complete systems with their cooling mechanisms in place, and we'll work on the vendors next year for that.
  • Viditor - Friday, December 16, 2005 - link

    The cooling systems I refer to are the air conditioning, not the HSF or the case fans...
    By doubling the heat output, you are also doubling the air-conditioning requirements.
  • coldpower27 - Friday, December 16, 2005 - link

    Which would be offset by the heating provided in the winter.
  • Viditor - Friday, December 16, 2005 - link

    LOL...good answer! (especially from coldpower)
  • Viditor - Friday, December 16, 2005 - link

    BTW...you do know (I assume) that they don't use heaters in a data center, right? (I figured you did, but thought I'd check just to be sure...)
  • JarredWalton - Friday, December 16, 2005 - link

    Not entirely true. They use "environmental regulators" that keep humidity and temperature in a set range. In the winter, the AC portion does less, but the fans are still going full blast. I should know, as I'm sitting here listening to the 75 dB hum of a large regulator right now. :|
  • Viditor - Friday, December 16, 2005 - link

    quote:

    Not entirely true. They use "environmental regulators" that keep humidity and temperature in a set range. In the winter, the AC portion does less, but the fans are still going full blast. I should know, as I'm sitting here listening to the 75 dB hum of a large regulator right now. :|


    That's my world as well...(TV broadcast equipment is kept under the same conditions...) so you have my condolences. But the point I was trying to get across was that the AC never gets turned off when it's a nice day outside...:)
  • coldpower27 - Friday, December 16, 2005 - link

    Finally, if we assume 40 systems for 1 year:

    $587.85 for 365.25 days for 1 Bensley system.

    $23,514 total to run 40 systems for 1 year.
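
    A quick check of those figures, reusing the 479W full-load draw and $0.14/kWh rate from earlier in the thread:

        def yearly_cost(watts, days=365.25, dollars_per_kwh=0.14):
            """Yearly electricity cost of one system at a constant load."""
            return watts * 24 * days / 1000 * dollars_per_kwh

        print(round(yearly_cost(479), 2))    # 587.85 per system
        print(round(yearly_cost(479) * 40))  # 23514 for 40 systems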
  • Poser - Friday, December 16, 2005 - link

    In addition to the math being off, the assumption that a data center would pay a kWh rate similar to a residential customer's also seems suspicious. Couldn't they, as a major customer, negotiate a much better rate?
  • coldpower27 - Friday, December 16, 2005 - link

    A watt is a joule per second though, isn't it?
