Sizing Up Servers: Intel's Skylake-SP Xeon versus AMD's EPYC 7000 - The Server CPU Battle of the Decade?
by Johan De Gelas & Ian Cutress on July 11, 2017 12:15 PM EST
SMT Integer Performance With SPEC CPU2006
Next, to measure the performance impact of simultaneous multithreading (SMT), we run two threads on the same physical core. This shows how well a single core puts its execution resources to use when both of its hardware threads are kept busy.
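As a rough sketch of how such a measurement can be set up on Linux (our own illustration, not the exact SPEC harness behind the numbers below; the benchmark_binary path is purely a placeholder), the two hardware threads sharing physical core 0 can be read from sysfs and each given one copy of the workload:

```python
# Minimal sketch: run two copies of a workload on the SMT siblings of core 0 (Linux).
# WORKLOAD is a placeholder command, not the actual SPEC CPU2006 harness.
import subprocess
from pathlib import Path

WORKLOAD = ["./benchmark_binary"]  # hypothetical workload binary

# Hardware threads that share physical core 0, e.g. "0,28" on a 28-core part.
siblings_file = Path("/sys/devices/system/cpu/cpu0/topology/thread_siblings_list")
siblings = siblings_file.read_text().strip().replace("-", ",").split(",")[:2]

# Launch one copy per sibling hardware thread, pinned with taskset.
procs = [subprocess.Popen(["taskset", "-c", cpu] + WORKLOAD) for cpu in siblings]
for p in procs:
    p.wait()
```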
| Subtest | Application type | Xeon E5-2690 @ 3.8 GHz | Xeon E5-2690 v3 @ 3.5 GHz | Xeon E5-2699 v4 @ 3.6 GHz | EPYC 7601 @ 3.2 GHz | Xeon 8176 @ 3.8 GHz |
|---|---|---|---|---|---|---|
| 400.perlbench | Spam filter | 39.8 | 43.9 | 47.2 | 40.6 | 55.2 |
| 401.bzip2 | Compression | 32.6 | 32.3 | 32.8 | 33.9 | 34.8 |
| 403.gcc | Compiling | 40.7 | 43.8 | 32.5 | 41.6 | 32.1 |
| 429.mcf | Vehicle scheduling | 44.7 | 51.3 | 55.8 | 44.2 | 56.6 |
| 445.gobmk | Game AI | 36.6 | 35.9 | 38.1 | 36.4 | 39.4 |
| 456.hmmer | Protein seq. analyses | 32.5 | 34.1 | 40.9 | 34.9 | 44.3 |
| 458.sjeng | Chess | 36.4 | 36.9 | 39.5 | 36 | 41.9 |
| 462.libquantum | Quantum sim | 75 | 73.4 | 89 | 89.2 | 91.7 |
| 464.h264ref | Video encoding | 52.4 | 58.2 | 58.5 | 56.1 | 75.3 |
| 471.omnetpp | Network sim | 25.4 | 30.4 | 48.5 | 26.6 | 42.1 |
| 473.astar | Pathfinding | 31.4 | 33.6 | 36.6 | 29 | 37.5 |
| 483.xalancbmk | XML processing | 43.7 | 53.7 | 78.2 | 37.8 | 78 |
The same results on a percentage basis versus the single-threaded scores, so we can see how much performance is gained by enabling SMT:
| Subtest | Application type | Xeon E5-2699 v4 @ 3.6 GHz | EPYC 7601 @ 3.2 GHz | Xeon 8176 @ 3.8 GHz |
|---|---|---|---|---|
| 400.perlbench | Spam filter | 109% | 131% | 110% |
| 401.bzip2 | Compression | 137% | 141% | 128% |
| 403.gcc | Compiling | 137% | 119% | 131% |
| 429.mcf | Vehicle scheduling | 125% | 110% | 131% |
| 445.gobmk | Game AI | 125% | 150% | 127% |
| 456.hmmer | Protein seq. analyses | 127% | 125% | 125% |
| 458.sjeng | Chess | 120% | 151% | 125% |
| 462.libquantum | Quantum sim | 91% | 129% | 90% |
| 464.h264ref | Video encoding | 101% | 112% | 112% |
| 471.omnetpp | Network sim | 109% | 116% | 103% |
| 473.astar | Pathfinding | 140% | 149% | 137% |
| 483.xalancbmk | XML processing | 120% | 107% | 116% |
On average, both Xeons pick up about 20% from SMT (Hyper-Threading). The EPYC 7601 improves by even more: a 28% boost on average. There are many possible explanations for this, but two are the most likely. First, in situations where AMD's single-threaded IPC is low because the core is waiting on the high latency of a more distant L3 cache (anything beyond the local 8 MB slice), a second thread ensures the CPU's execution resources are put to better use (compression, the network sim). Second, we saw earlier that the AMD core can extract more memory bandwidth in lightly threaded scenarios, which might help in the benchmarks that stress DRAM (like video encoding and the quantum sim).
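For reference, those averages can be re-derived directly from the percentage table above; the short sketch below (our own illustration) simply takes the arithmetic mean of the per-subtest SMT uplifts:

```python
# Per-subtest SMT uplift (%) from the table above, in perlbench ... xalancbmk order.
uplift = {
    "Xeon E5-2699 v4": [109, 137, 137, 125, 125, 127, 120, 91, 101, 109, 140, 120],
    "EPYC 7601":       [131, 141, 119, 110, 150, 125, 151, 129, 112, 116, 149, 107],
    "Xeon 8176":       [110, 128, 131, 131, 127, 125, 125, 90, 112, 103, 137, 116],
}

for cpu, values in uplift.items():
    gain = sum(values) / len(values) - 100  # average gain over the single-threaded runs
    print(f"{cpu}: average SMT gain ~ {gain:.0f}%")
# Prints roughly 20% for both Xeons and 28% for the EPYC 7601, matching the text.
```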
Nevertheless, kudos to the AMD engineers. Their first SMT implementation is very well done and offers a tangible throughput increase.
219 Comments
TheOriginalTyan - Tuesday, July 11, 2017
Another nicely written article. This is going to be a very interesting next couple of months.
coder543 - Tuesday, July 11, 2017
I'm curious about the database benchmarks. It sounds like the database is tiny enough to fit into L3? That seems like a... poor benchmark. Real world databases are gigabytes _at best_, and AMD's higher DRAM bandwidth would likely play to their favor in that scenario. It would be interesting to see different sizes of transactional databases tested, as well as some NoSQL databases.
psychobriggsy - Tuesday, July 11, 2017
I wrote stuff about the active part of a larger database, but someone's put a terrible spam blocker on the comments system. Regardless, if you're buying 64C systems to run a DB on, you likely will have a dataset larger than L3, likely using a lot of the actual RAM in the system.
roybotnik - Wednesday, July 12, 2017
Yea... we use about 120GB of RAM on the production DB that runs our primary user-facing app. The benchmark here is useless.
haplo602 - Thursday, July 13, 2017
I do hope they elaborate on the DB benchmarks a bit more or do a separate article on it. Since this is a CPU article, I can see the point of using a small DB to fit into the cache, however that is useless as an actual DB test. It's more an int/IO test. I'd love to see a larger DB tested that can fit into the DRAM but is larger than available caches (32 GB maybe?).
ddriver - Tuesday, July 11, 2017
We don't care about real world workloads here. We care about making intel look good. Well... at this point it is pretty much damage control. So let's lie to people that intel is at least better in one thing. Let me guess, the database size was carefully chosen to NOT fit in a ryzen module's cache, but small enough to fit in intel's monolithic die cache?
Brought to you by the self proclaimed "Most Trusted in Tech Since 1997" LOL
Ian Cutress - Tuesday, July 11, 2017
I'm getting tweets saying this is a severely pro-AMD piece. You are saying it's anti-AMD. ¯\_(ツ)_/¯
ddriver - Tuesday, July 11, 2017
Well, it is hard to please intel fanboys regardless of how much bias you give intel, considering the numbers. I did not see you deny my guess on the database size, so presumably it is correct then?
ddriver - Tuesday, July 11, 2017
In the multicore 464.h264ref test we have 2670 vs 2680 for the xeon and epyc respectively. Considering that the epyc score is mathematically higher, how does it yield a negative zero? Granted, the difference is a mere 0.3% advantage for epyc, but it is still a positive number.
Headley - Friday, July 14, 2017
I thought the exact same thing