
38 Comments


  • minidad - Thursday, December 21, 2006 - link

    Hi,

    AnandTech has done a lot of hard work here and should be commended for it, but the methodology appears flawed. The metric of comparison between the different systems is the % CPU utilization at 6 different load points. However, if you examine the Dell DVD Store CPU utilization graphs, the CPU utilization at each load point differs between CPUs except for the two heaviest load points. They should be the same at each load point for a correct comparison. In other words, when the Opteron 2218 is running at 65% CPU load at load point 3, the Woodcrest is running at 50%. Since the load points for the different CPUs are not comparable, the conclusions of the article are unfortunately not usable.
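    minidad's point can be made concrete with a small sketch: utilization figures are only comparable when both systems are delivering the same throughput, so mismatched curves have to be interpolated to a common load point first. All numbers below are hypothetical, invented purely for illustration; they are not taken from the article.

```python
# Sketch: compare CPU utilization only at equal delivered load.
# The (load, utilization) points below are hypothetical, not article data.

def util_at_load(curve, load):
    """Linearly interpolate % CPU utilization at a given load (e.g. orders/min)."""
    pts = sorted(curve)
    for (l0, u0), (l1, u1) in zip(pts, pts[1:]):
        if l0 <= load <= l1:
            return u0 + (u1 - u0) * (load - l0) / (l1 - l0)
    raise ValueError("load outside measured range")

# Hypothetical measured curves for the two systems
opteron   = [(100, 20), (200, 45), (300, 65), (400, 90)]
woodcrest = [(100, 15), (200, 35), (300, 50), (400, 75)]

# Fair comparison: interpolate both systems to the same load point.
for load in (250, 350):
    print(load, util_at_load(opteron, load), util_at_load(woodcrest, load))
```

    Comparing utilization figures taken at different effective loads answers no well-posed question; interpolating both systems to a shared load point at least makes the numbers commensurable.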


  • Mantruch - Thursday, December 21, 2006 - link

    Woodcrests are faster? Well, that's all I need to know.
  • Nighteye2 - Monday, December 18, 2006 - link

    Does that version of Windows server support NUMA? It could make a significant impact on results...
  • BikeDude - Thursday, December 21, 2006 - link

    NUMA is supported on Windows Server 2003, Windows XP SP2, and newer.

    See reqs at:
    http://msdn.microsoft.com/library/en-us/dllproc/ba...
  • gouyou - Monday, December 18, 2006 - link

    I think it would be nice to have a test using a Linux platform. I'm wondering if there is any performance gain for AMD when using scheduling and memory management algorithms designed with a NUMA setup in mind. I guess that in some scenarios we might see Opteron performance closer to Intel's.
  • JarredWalton - Monday, December 18, 2006 - link

    The Dell test runs on Linux, while our forums benchmark runs on Windows Server 2003 x64. We will be providing additional benchmarks in the near future comparing Opteron and Xeon in other ways, so stay tuned.
  • Spacecomber - Monday, December 18, 2006 - link

    Sorry if I overlooked where this was mentioned in the article, but are these comparable systems comparably priced?
  • Nehemoth - Monday, December 18, 2006 - link

    Why don't you include test information for Terminal Services? For example, our company has plans to migrate from an old version of Citrix MetaFrame to Windows Server 2003 Terminal Services.
    We don't care much about power consumption (in our country the electricity bill is always high no matter what), but I do care a lot about the upgrade path. For example (keeping in mind the HP solutions, the DL365 Opteron vs. the DL380 G5):

    1. If I choose Opteron over Woodcrest, will it be easier or cheaper to buy more memory at the end of next year?

    2. What about quad core? I know I can buy Woodcrest QC now, but will that upgrade be feasible given Intel's bus, or should I look ahead to Opteron QC? (In any case, an upgrade for a system bought in January 2007 would come around January 2008.)

    These are the things that matter to me right now, and I hope AT answers these questions sooner rather than later.


  • Nehemoth - Monday, December 18, 2006 - link

    HP has a curious quad-core upgrade path:
    http://www.theinquirer.net/default.aspx?article=36...
  • mino - Monday, December 18, 2006 - link

    As for the upgrade path, go AMD.

    While Woodcrest is usually a bit better than AMD, K8L will be better than Clovertown in almost every respect.

    Also, I doubt the 45nm Penryn-derived 4-core Xeons will be compatible with current platforms.

    As of now I would go for a serious 16-DIMM board with a cheaper dual-core like the 2214, and plan an upgrade to K8L in Q4'07 or Q1'08.
  • proteus7 - Monday, December 18, 2006 - link

    Not sure how the conclusion that Socket F wins is reached.
    True performance benchmarking can only occur with the CPU as close to 100% utilization as possible and no other bottlenecks in the system. For a TPC-C-style OLTP database workload, for example, usually 400+ HDDs would be required on a 4-core system to ensure this, along with a lot more memory (16GB minimum would be realistic for 4 cores).

    In every benchmark posted, at full load, Woodcrest wins. Trying to spin "load points", "perf per watt", etc. just muddies the waters.

    Finally, you should include a disclaimer that this test covers a very specific workload. The good news is that there was no deliberate attempt to skew the results toward AMD; if there had been, you would have picked a large OLAP, DSS, or data-warehouse-type workload, which takes better advantage of Socket F's superior memory latency on out-of-cache workloads.


  • JarredWalton - Monday, December 18, 2006 - link

    Actually, Woodcrest isn't the clear winner. It is faster, at a higher power draw, and there are reportedly situations where Opteron will still come out with a (sometimes significant) lead. We didn't test such situations here, but a "clear win" would be what we see on the desktop where Core 2 Duo is typically faster than any X2 processor while using less power - although the power situation is still somewhat up for debate.

    I know I worked in a data center for several years where we had at least 12 servers. I don't think any of those servers was running at more than about 25% capacity, so there are definitely companies that aren't going to care too much about performance at maximum load. Of course, it's kind of funny that the data center I worked at was a 100% Dell shop (at least for desktops, laptops, and non-UNIX servers), so all of the servers were running the Xeon DP/MP processors during a time where Opteron was clearly providing better performance and lower power requirements.
  • mlittl3 - Monday, December 18, 2006 - link

    Jarred,

    I think the problem with many of these comments questioning the conclusions of the article is that not many people understand how enterprise-level workstations and servers are bought and used. It would be nice if AnandTech did an article about a company that uses a lot of workstations and servers and went through their thought process when deciding what hardware to buy. The article could then go into how the servers manage workloads and what factors are important (performance, power, stability, etc.).

    Too many readers here think that a server cluster is bought as soon as new hardware is released, and that enterprise-level IT professionals go to hardware review sites and see which hardware has a better 3DMark score. This is of course not the case.
  • Strunf - Tuesday, December 19, 2006 - link

    I think it's pretty common knowledge how companies get their systems. I mean, after Intel owned so much of the market with the crap they had, it's pretty obvious that performance and power consumption are secondary... but articles have to stay objective, because no one knows what the deals between companies and the OEMs really "hide"...

    “Too many readers here think that a server cluster is bought as soon as new hardware is released...”
    Actually I would say many server clusters are bought at the same time new hardware is released...
  • mino - Wednesday, December 20, 2006 - link

    "Actually I would say many server clusters are bought at the same time new hardware is released..."
    Yeah, they are. The purchase is mostly waiting for the certain speed-bump in architecture/design to appear...
  • chucky2 - Thursday, December 21, 2006 - link

    I work in IT as a PM at what is probably now the largest telecom in the US - not as a standards committee member, purchaser, or operations person who actually sets the hardware up - and at least for the boxes that host the services my org cares for, I know the hardware folks don't like to see any one of them go over 50% CPU or RAM utilization, and that's simply because of failover.

    If a machine in a cluster goes down, the other machine(s) are expected to pick up that load without incurring downtime... downtime is bad. You could have .003% downtime for a quarter, just for one small but central part of IT (like mine), and that might equate to a million dollars lost.
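    The arithmetic behind that figure is easy to sketch. The cost-per-minute number below is hypothetical, chosen only to show how a few minutes of outage per quarter can approach a million dollars; the .003% figure is the one quoted above.

```python
# Rough arithmetic: what .003% downtime over a quarter amounts to.
# cost_per_minute is a hypothetical figure for a large telecom service.

QUARTER_MINUTES = 90 * 24 * 60   # ~one quarter, in minutes (129,600)
downtime_pct = 0.003             # 0.003% downtime
cost_per_minute = 250_000        # hypothetical dollars lost per minute down

downtime_min = QUARTER_MINUTES * downtime_pct / 100
loss = downtime_min * cost_per_minute
print(f"{downtime_min:.2f} minutes of downtime -> ${loss:,.0f} lost")
```

    Under these assumptions, .003% works out to under four minutes of downtime per quarter, yet the dollar figure still lands near the million-dollar mark mentioned above.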

    Hardware is not necessarily ordered when new hardware is released... in fact, that's more than likely not the case. New hardware is not necessarily tested and proven hardware. Just because the parts folks (Intel, AMD, whoever) and the vendors (IBM, Sun, HP, whoever) are selling it doesn't mean it's stable, or at least proven to work for what a company is going to use it for. When the expectation is 100% uptime except for maintenance periods, you want your standards folks to have tested the stuff everyone in the company will be allowed to order... or at least to have looked over the changes to the company standard and said, OK, that's an acceptable upgrade, we're comfortable with that change.

    So, no, ordering the latest and greatest hardware when it comes out is not really the smart way to do things when you're talking about reliability... and the same goes for software... putting on the latest AIX or Oracle or JRE patch/version almost never happens. It's the same thing there: unless there's an absolute need for that specific hardware/software, you go with tried and true, because that's what's delivering for sure.

    The above is most likely why AMD had such a hard time breaking into the enterprise sector... they had to prove that their hardware could get the job done as reliably as Intel, Sun, and IBM. Now that they have, hopefully the major enterprise shops will give them more consideration... as good as Opteron has been, they deserve it.

    Chuck
  • LoneWolf15 - Monday, December 18, 2006 - link

    Are pages three and four supposed to have tables/graphs? I'm getting two paragraphs of text on each page using Firefox 2.0, and that's it. Seems like there'd be more under your testing methodology.
  • Jason Clark - Monday, December 18, 2006 - link

    Yep, I was fixing something in the article and juggling pages around. Should be ok now.
  • LoneWolf15 - Monday, December 18, 2006 - link

    Still doesn't look right. Paragraph formatting is really off, and there's a couple of HTML tags showing. Makes it kind of hard to read.
  • LoneWolf15 - Monday, December 18, 2006 - link

    That's on Page 3 btw. Page 4 now looks fine.
  • JarredWalton - Monday, December 18, 2006 - link

    Fixed 3 as well, thanks.
  • MartinT - Monday, December 18, 2006 - link

    I don't like tests that rely on one tested party to supply both their own and their competitor's systems. Those situations are prone to favorable choice of components and outright manipulation much beyond the BIOS settings you claim to have checked.

    The very least you could do would be to ask for the competitor to supply their own system for comparison.

    Also, while I realize that AMD is kinda desperate to find any advantage, their current "Best CPU at doing nothing."-push seems rather convoluted, IMHO.
  • JarredWalton - Monday, December 18, 2006 - link

    If you ask each vendor to supply a system, you will never get anywhere near "equivalent" configurations. The purpose of this article is to show that there are a lot of companies that will be fine with their current Opteron systems, and if you are more interested in saving power (because you know your server won't be run at capacity) Opteron does very well. Obviously, there are plenty of areas where Woodcrest (and now Clovertown) are better, and we've covered some of those areas in the past.

    What server is best? That depends largely on the intended use, which is hardly surprising. I've heard that Opterons do especially well in virtual server environments, for example, easily surpassing Intel's current best. I'd love to see some concrete, independent testing of that sort of thing, but obviously figuring out exactly how to benchmark such servers is difficult at best.
  • MartinT - Monday, December 18, 2006 - link

    I'm not sure you understood my point, which was that by sourcing an Intel system from AMD, AMD had full control over not just their own system, but its competitor, too, down to even the specific CPUs they sent.

    Now that wouldn't be too bad if it was a performance test, these hardly vary much amongst samples from the same product lines, but as power consumption enters the mix, and in fact takes center-stage here, system choices become paramount to the outcome.

    Maybe I'm too big a cynic, and maybe what I allege is far from true, but under the specific circumstances of this review, I suspect that AMD's competitive performance analysis team played a major role in what hardware actually ended up in your hands.
    (i.e. Not just are the memory configs and motherboards probably carefully chosen to support the intended message, the Opteron and Xeon CPUs might also have been sampled accordingly. And from your conclusion, they've done their job well, apparently!)

    Would an off-the-shelf Opteron system produce the same results your review unit did? I don't know. Would the outcome have changed if the Intel Xeon system wasn't built to the specs of its main competitor? I don't know. But I'd be much more willing to accept the conclusion if either (a) both competitors supplied their entries themselves or (b) both units were anonymously bought from a respected OEM.

    PS: Kudos to the AMD marketing team, too, as they managed to seed at least two of these articles and so far got their message across, and only a couple of days before Christmas, too, virtually ensuring full frontpage exposure for the better part of three weeks.
  • mino - Monday, December 18, 2006 - link

    I can't speak for AT, but these numbers are reasonable and pretty much correspond to our own observations.

    Overall, the review says the two - Opteron 2000 and Xeon 5100 - are pretty evenly matched. And AFAIK this is the opinion of pretty much every serious IT magazine or professional.

    BTW, we had IBM tech guys visiting and they had a similar view of the situation: the 5100 slightly better clock-for-clock than the 2000 in most general tasks, while Opteron rules the roost on heavily loaded virtualized machines.

    From a long-term perspective, IMHO Opteron is the far better choice, if only for the possible upgrade to K8L; the Woodcrest platform has no such option available. And no, Clovertown is NOT a serious contender for most workloads. It would yield even to a hypothetical quad-core K8, not to mention K8L.
  • WarpNine - Monday, December 18, 2006 - link

    Please read this review (I believe it covers the same comparison as this one):
    http://www.techreport.com/reviews/2006q4/xeon-vs-o...

    A very different conclusion?
  • JarredWalton - Monday, December 18, 2006 - link

    I think the reviews basically say the same thing in different ways. We are not saying Opteron is 100% the clear winner here, merely that it can still be very useful and fulfills a market need. For a lot of companies, service and support will be at least as important as power and performance, though - which is why plenty of businesses ran NetBurst servers even when Opteron was clearly faster and more power efficient. For companies that switched to Opterons, it's going to take more than a minor performance advantage (in some cases) to get them to change back. At least, that would make sense to me.

    Companies that need absolute maximum performance will of course be looking at Clovertown configurations (or perhaps even waiting for 4S quad core - is that out yet?)
  • photoguy99 - Monday, December 18, 2006 - link

    The conclusion of this article seems slanted - Did AMD suggest specifically that you look into "low end performance per watt"? Be honest, they planted the seed, right?

    1) Please post a link to the last article where AT's conclusion was overall this favorable to the top end performance loser. Please, we're waiting...

    2) Why should the Intel system not be quad-core? Just because AMD doesn't have it yet? They even work in the same socket!

    3) How can you justify saying AMD did "very well", and there's no Intel upgrade benefit unless you "routinely run your servers near capacity", when Intel quad core would have completely invalidated the results for performance per watt at nearly all levels?

    Full disclosure, no axe to grind: I have praised previous AT articles because they are usually great. I currently own AMD as my primary system.

    This article just doesn't smell right - too much vendor influence.


  • Jason Clark - Monday, December 18, 2006 - link

    Have you not read anything we've posted in the last few months?

    Woodcrest article: http://www.anandtech.com/IT/showdoc.aspx?i=2793&am...

    We've been touting performance / watt for months. You most certainly don't compare a quad core (8-way) setup to a 4-way and call that fair :) We have a Clovertown article on the way, it's going to include an 8-way socket-F system.

    Cheers
  • Lifted - Monday, December 18, 2006 - link

    I agree, to an extent. I just ordered some DL380 G5s, and the current fastest quad-core CPU option is 1.86GHz. Comparing 8 cores at 1.86GHz vs. 4 cores at 3.0GHz starts to get difficult, as it really depends on the application in use on that system. Since this article seems to be more of a comparison of CPU architectures, the systems and CPUs used in the test seem appropriate. I think it's smart to wait until AMD has quad core out and compare apples to apples. The folks currently ordering quad-core Intel systems (like me) would likely not be interested in a dual-core Intel or AMD system, as the task dictates the hardware; for the systems I'm using quad cores in, I simply don't need the speed, just more cores.
  • yyrkoon - Monday, December 18, 2006 - link

    quote:

    Some might wonder if a different - read Intel - motherboard for the Woodcrest system could have significantly altered the outcome of these tests


    After reading about FB-DIMMs and the direct comparison to DDR2, I can't help but wonder how well the Xeons would compare if it were possible to use them with standard off-the-shelf DDR2. Maybe this correlates with the quoted text above from your article; I don't know, as I don't know a lot about server-grade equipment - well, at least not "cutting edge" server equipment.
  • JarredWalton - Monday, December 18, 2006 - link

    FB-DIMMs definitely use more power, and there's technically nothing to prevent someone from making a dual socket Xeon board that uses DDR2 or even DDR (or DDR3, etc.) instead of FB-DIMMs. However, for now Intel has decided that FB-DIMM DDR2 is the way they're going for workstation/server platforms, so all we can do is wonder "what if...?"
  • Furen - Monday, December 18, 2006 - link

    Huh? As I understand it, there IS a difference. The FB memory controller is different, and the pin configuration is significantly different: half the pins connect to the memory controller and the other half connect to the next DIMM on the same channel. Then there's also the fact that quad-channel DDR2 would require an insane number of traces, while quad-channel FB-DIMM requires roughly the same number of traces.
  • yyrkoon - Tuesday, December 19, 2006 - link

    Speed / power difference, silly . . .
  • mino - Monday, December 18, 2006 - link

    What does the memory controller have to do with the possibility of sticking two Woodcrests on a DDR2 chipset???
    The only thing in play is FSB compatibility.

    BTW, NVIDIA is stepping in, so such a platform is quite possible in 2007. SLI Quadro, anyone...
  • joex444 - Monday, December 18, 2006 - link

    Interesting, not what your mom said last night. Oh, who got pwned?
  • glennpratt - Monday, December 18, 2006 - link

    He didn't say there isn't a difference, did he?
  • ltcommanderdata - Monday, December 18, 2006 - link

    I guess AMD is encouraging many sites to do this head-to-head comparison, since Tech Report has one too with similar systems. They swapped in a pair of 2.67GHz X5355 Clovertowns as well, which was interesting. It's good that you put in a pair of 5150s, though, since that's probably more comparable to the second-from-the-top 2.6GHz 2218s that AMD provided.

    What I would love to see is data points for the top-of-the-line 2.8GHz 2220 SE, to see if the power numbers are actually that much worse, and for a 2.4GHz 2216 HE, to see if the power numbers are that much better. I'd also be interested in seeing a pair of 2.33GHz 5148 Woodcrests reviewed, since I haven't seen anyone look at how much better the LV chips are compared to regular 65W and 80W Woodcrests.

    It may not be that fair a comparison, but including the 2.67GHz X5355 Clovertown, as Tech Report did, would also be informative. Although data for the 2.33GHz E5345 Clovertown is probably more important, since it still offers a 1333MHz FSB while keeping the 80W TDP of the lower parts, theoretically putting it in the sweet spot for performance/watt.

    It should probably also be pointed out that Tech Report tried a 4x2GB configuration for the FB-DIMMs and found it saved 22W or so compared to an 8x1GB configuration. That's something to note for system configurators, and it leaves more room for future expansion too.
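    The per-DIMM implication of that observation is worth a quick back-of-the-envelope pass. The 22W figure is as quoted from Tech Report; the per-DIMM number below is derived arithmetic, not measured data.

```python
# Back-of-the-envelope: incremental power per populated FB-DIMM,
# from the reported ~22W saved going from 8x1GB to 4x2GB.

dimms_before, dimms_after = 8, 4
watts_saved = 22  # as reported by Tech Report

watts_per_dimm = watts_saved / (dimms_before - dimms_after)
print(f"~{watts_per_dimm:.1f} W per populated FB-DIMM")
```

    That roughly 5.5W per module (much of it presumably the Advanced Memory Buffer chip on each FB-DIMM) is why halving the DIMM count at the same capacity shows up so clearly in system power draw.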
