The Magic Inside the Uncore

We were already spoiled by Ivy Bridge EP, which implemented a fairly complex uncore architecture. With Haswell EP, the communication between memory controllers, LLC, and cores has become even more intricate.

The Sandy Bridge EP CPU consisted of two columns of cores and LLC slices, connected by a single ring bus. The top models of the Ivy Bridge EP had three columns connected by a dual ring bus, with outer and inner rings as pictured above. The rings move data in opposite directions (clockwise/counter-clockwise) in order to reduce latency by allowing data to take the shortest path to the destination. As data is brought onto the ring infrastructure, it must be scheduled so that it does not collide with data already on the ring.

The 14 and 18 core SKUs now have four columns of cores and LLC slices, and as a result scheduling gets very complicated. Intel has now segregated the dual ring buses and integrated two buffered switches to simplify scheduling. It's somewhat comparable to the way an Ethernet switch divides a network into segments. Each ring can act independently, and as a result the effective bandwidth increases, which is especially helpful when FMA/AVX instructions are working on 256-bit chunks of data.
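
The shortest-path routing on a bidirectional ring can be sketched in a few lines; the stop numbering here is purely illustrative, not Intel's actual topology:

```python
# Illustrative sketch of shortest-path selection on a bidirectional ring,
# showing why two counter-rotating rings reduce average hop count.
def ring_hops(src: int, dst: int, stops: int) -> tuple[int, str]:
    """Return (hops, direction) for the shorter of the two ring directions."""
    cw = (dst - src) % stops          # hops going clockwise
    ccw = (src - dst) % stops         # hops going counter-clockwise
    return (cw, "clockwise") if cw <= ccw else (ccw, "counter-clockwise")

# On an 8-stop ring, going from stop 1 to stop 7 takes 2 hops
# counter-clockwise instead of 6 hops clockwise.
print(ring_hops(1, 7, 8))  # (2, 'counter-clockwise')
```
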

In total there are now three different die configurations. The first one, from four up to eight cores, is very similar to the lower count Ivy Bridge EPs. It has one dual ring, two columns of cores, and only one memory controller. The LLC cache is smaller on this die and has a lower latency.

The second configuration supports 10-12 cores and is a smaller version of the third die configuration that we described above. These dies have two memory controllers. The blue points indicate where data can jump onto the ring buses. Note that the die configurations are not symmetrical. For example an 18-core CPU has 8 cores (4-4) and 20MB LLC on one side, with 25MB LLC and 10 cores on the other. The middle configuration drops six to eight of the cores on the right ring, with an associated amount of LLC.

Data/instructions of one core are not necessarily stored in the adjacent cache slice. That could have lowered latency in some cases, but it can also create hotspots. Instead, data is placed based on the physical address, ensuring that all LLC slices are accessed uniformly. Transactions take the shortest path.
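
Intel does not document the actual slice-selection hash, but the principle can be sketched as follows (the hash function below is a made-up stand-in, not the real one):

```python
# Illustrative only: Intel's real LLC slice hash is undocumented. The sketch
# shows the principle of spreading physical addresses uniformly over slices
# by hashing address bits rather than always using the adjacent slice.
def llc_slice(phys_addr: int, n_slices: int) -> int:
    line = phys_addr >> 6              # 64-byte cache line index
    # XOR-fold the line index so that both low and high address bits
    # contribute, avoiding hotspots from strided access patterns.
    h = line ^ (line >> 7) ^ (line >> 13)
    return h % n_slices

# Consecutive cache lines scatter across the 18 slices of the top SKU
# instead of piling onto one slice.
slices = [llc_slice(0x1000 + 64 * i, 18) for i in range(8)]
```
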

The rings run on their own voltage and frequency, just like the cores. So if more I/O or coherency messaging is going on than processing, power can be dynamically allocated to speed up the rings.

Cache Coherency

The Home Agents handle cache coherency and requests to DRAM. In dies with two memory controllers, each home agent drives two channels. In dies with a single memory controller, one home agent addresses all four channels. While the smaller dies have faster LLC caches, Intel estimates that the second memory controller extracts 5% to 10% more bandwidth.
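
The exact address-to-channel mapping is not public, but the idea can be sketched as follows; the simple line-interleave policy here is purely an assumption for illustration:

```python
# Hypothetical interleaving sketch (the real mapping is not public): a die
# with two home agents, each owning two DDR4 channels, can interleave
# consecutive cache lines so that all four channels are loaded evenly.
def route(phys_addr: int) -> tuple[int, int]:
    """Return (home_agent, channel) for a physical address."""
    line = phys_addr >> 6               # 64-byte line index
    home_agent = line & 1               # alternate lines between home agents
    channel = (line >> 1) & 1           # then alternate between its channels
    return home_agent, channel

# Four consecutive cache lines land on four different channels.
```
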

The two socket Haswell EP supports three snooping modes as you can see below. The first, Early Snoop, was available starting with Sandy Bridge EP models. With Ivy Bridge EP a second mode, Home Snoop, was introduced. Haswell EP now adds a third mode, Cluster on Die.

These snoop modes can be set in the BIOS.

Ivy Bridge used home snooping and had a directory in memory. The latest Xeon adds a directory cache (about 14KB) in each Home Agent. This directory cache keeps track of contested cache lines to lower cache-to-cache transfer latencies. Another result is that directory updates in memory are less frequent and there are fewer broadcast snoops. Cluster On Die mode is the latest addition to the coherency protocols.

Cluster On Die can be understood as splitting the CPU and LLC into two parts that behave like two NUMA nodes. The OS is presented with two affinity domains. As a result, the latency of the LLC is lowered, but the hit rate drops slightly. However, if your application is NUMA aware, data and instructions are kept close to the part of the CPU that is processing them.
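
A toy model makes the tradeoff concrete. All numbers below are illustrative assumptions, not measured Haswell EP figures: COD trades a faster local LLC for a slightly lower hit rate, and NUMA-aware placement decides which side wins:

```python
# Toy model of the Cluster-on-Die tradeoff. The latencies and hit rates
# are illustrative assumptions, not measured Haswell EP numbers.
def avg_latency(llc_hit_rate: float, llc_lat: float, mem_lat: float) -> float:
    """Average load latency (ns) given an LLC hit rate and hit/miss latencies."""
    return llc_hit_rate * llc_lat + (1 - llc_hit_rate) * mem_lat

# One big LLC: higher latency, higher hit rate.
default_mode = avg_latency(0.60, 20.0, 90.0)
# COD with NUMA-aware placement: faster local LLC, slightly lower hit rate.
cod_mode = avg_latency(0.59, 16.0, 90.0)
# A NUMA-unaware workload would see the hit rate fall further and lose
# the advantage again.
```
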

Higher QPI speeds; also notice the "COD" and "Early Snoop" options.

And finally, QPI has been sped up to 9.6 GT/s, from 8 GT/s (as you can see in the BIOS shot).

More improvements

The list of (small) improvements is long and we have not been able to test all of them. But here is an overview of what else has improved:

  • Lower VM entry/exit latency. The latency of switching back and forth to the hypervisor has been improved again; Sandy Bridge had slightly increased it compared to Westmere.
  • VMCS shadowing. The VM Control Structure can be exposed to hypervisors running on top of the main hypervisor, so you get VT-x inside your nested hypervisor.
  • EPT Access and Dirty bits. These make it easier to move memory pages around, which is essential for Live Migration / vMotion.
  • Cache monitoring (CMT) & allocation technology (CAT). CMT allows you to measure whether a certain virtual machine is hogging the LLC. On certain SKUs it is also possible to control the placement of data in the last-level cache.

Most of the improvements listed are specific to virtualized servers. However, cache monitoring is also available to a "native" OS.
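
On Linux, CAT is typically driven through the kernel's resctrl filesystem (/sys/fs/resctrl), where a group's schemata file takes an L3 way mask. As a rough sketch (the schemata line format follows the kernel's resctrl documentation; the number of ways and the cache ID are SKU-dependent assumptions), the mask can be computed like this:

```python
# Sketch of building an L3 CAT way mask for Linux's resctrl interface
# (/sys/fs/resctrl/<group>/schemata). The "L3:<id>=<mask>" line format is
# taken from the kernel's resctrl documentation; the number of cache ways
# varies per SKU, so the values here are examples.
def cat_schemata(first_way: int, n_ways: int, cache_id: int = 0) -> str:
    """Return an 'L3:<id>=<mask>' line reserving n_ways contiguous ways."""
    mask = ((1 << n_ways) - 1) << first_way   # CAT requires contiguous masks
    return f"L3:{cache_id}={mask:x}"

# Reserve the lowest 4 ways for a noisy-neighbour VM's resource group:
print(cat_schemata(0, 4))   # L3:0=f
```
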

Next Stop: the Uncore Power Optimizations
Comments

  • MorinMoss - Friday, August 9, 2019 - link

    Hello from 2019.
    AMD has a LOT of ground to make up but it's a new world and a new race
    https://www.anandtech.com/show/14605/the-and-ryzen...
  • Kevin G - Monday, September 8, 2014 - link

    As an owner of a dual Opteron 6376 system, I shudder at how far behind that platform is. Then I look down and see that I have both of my kidneys as I didn't need to sell one for a pair of Xeons so I don't feel so bad. For the price of one E5-2660v3 I was able to pick up two Opteron 6376's.
  • wallysb01 - Monday, September 8, 2014 - link

    But the rest of the system cost is about the same. So you get 1/2 the performance for a 10% discount. YEPPY!
  • Kevin G - Monday, September 8, 2014 - link

    Nope. Build price after all the upgrades over the course of two years is somewhere around $3600 USD. The two Opterons accounted for a bit more than a third of that price. Not bad for 32 cores and 128 GB of memory. Even with Haswell-E being twice as fast, I'd have to spend nearly twice as much (CPU's cost twice as much as does DDR4 compared to when I bought my DDR3 memory). To put it into perspective, a single Xeon E5-2699v3 might be faster than my build but I was able to build an entire system for less than the price of Intel's flagship server CPU.

    I will say something odd - component prices have increased since I purchased parts. RAM prices have gone up by 50% and the motherboard I use has seemingly increased in price by $100 due to scarcity. Enthusiast video card prices have also gotten crazy over the past couple of years so a high end video card is $100 more for top of the line in the consumer space.
  • wallysb01 - Tuesday, September 9, 2014 - link

    Going to the E5 2699 isn't needed. A pair of 2660 v3s is probably going to be nearly 2x as fast as the 6376, especially for floating point where your 32 cores are more like 16 cores, or for jobs that can't use very many threads. True, a pair of 2660s will be twice as expensive. On a total system it would add about $1.5K. We'll have to wait for the workstation slanted view, but for an extra $1.5K, you'd probably have a workstation that's much better at most tasks.
  • Kevin G - Friday, September 12, 2014 - link

    Actually if you're aiming to double the performance of a dual Opteron 6376, two E5-2695v3's look to be a good pick for that target according to this review. A pair of those will set you back $4848 which is more than what my complete system build cost.

    Processors are only one component. So while a dual Xeon E5-2695v3 system would be twice as fast, total system cost is also approaching double due to memory and motherboard pricing differences.
  • Kahenraz - Monday, September 8, 2014 - link

    I'm running a 6376 server as well and, although I too yearn for improved single-threaded performance, I could actually afford to own this one. As delicious as these Intel processors are, they are not priced for us mere mortals.

    From a price/performance standpoint, I would still build another Opteron server unless I knew that single-threaded performance was critical.
  • JDG1980 - Tuesday, September 9, 2014 - link

    The E5-2630 v3 is cheaper than the Opteron 6376 and I would be very surprised if it didn't offer better performance.
  • Kahenraz - Tuesday, September 9, 2014 - link

    6376s can be had very cheaply on the second-hand market, especially bundled with a motherboard. Additionally, the E5-2630 v3 requires both a premium-priced board and DDR4 memory.

    I'd wager you could still build an Opteron 6376 system for half or less.
  • Kevin G - Tuesday, September 9, 2014 - link

    It'd only be fair to go with the second hand market for the E5-2630v3's but being new means they don't exist. :)

    Still, going by new prices, an Opteron 6376 will be cheaper, but only by roughly 33% from what I can tell. You're correct that the new Xeons carry premium pricing on motherboards and DDR4 memory.
