Ryzen 5000 Mobile: SoC Upgrades

While the introduction page focuses mainly on the change to Zen 3 cores, AMD has explained to AnandTech that there are plenty of other changes in this update which enable both performance and efficiency, as well as battery life enhancements, for users.

From this point on I will start using the silicon codenames, such as

  • Cezanne (Ryzen 5000 Mobile with Zen 3),
  • Lucienne (Ryzen 5000 Mobile with Zen 2),
  • Renoir (Ryzen 4000 Mobile, all Zen 2),
  • Vermeer (Ryzen 5000 Desktop, all Zen 3),
  • Matisse (Ryzen 3000 Desktop, all Zen 2)

Double Cache and Unified Cache for Cezanne

To reiterate the primary SoC change for Cezanne compared to Renoir, the eight cores now have a unified cache rather than two cache segments. On top of this, the cache size has also doubled.

This is similar to what we saw on the desktop, when AMD introduced Vermeer – Vermeer with Zen 3 had a unified cache over Matisse with Zen 2. At that time, AMD was pointing to the unified cache enabling better gaming performance as it lowered the ‘effective’ latency for CPU memory requests in that combined cache region. The same thing is expected to hold true for the new Cezanne silicon in Ryzen 5000 Mobile, and will play a key part in enabling that +19% IPC increase from generation to generation.
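As a rough illustration of why unification helps, consider a toy model (all figures here are illustrative, not AMD's numbers): with Renoir's split design each core could only allocate into its own 4 MB half of the 8 MB L3, while a Cezanne core can reach the full 16 MB. For a working set larger than one slice, the effective hit rate, and with it the average latency, improves markedly:

```python
def avg_latency_cycles(working_set_mb, reachable_l3_mb,
                       hit_cycles=40, miss_cycles=200):
    # Toy uniform-access model: L3 hits cost hit_cycles, misses go to DRAM.
    # The cycle counts are illustrative, not measured values.
    hit_rate = min(reachable_l3_mb / working_set_mb, 1.0)
    return hit_rate * hit_cycles + (1 - hit_rate) * miss_cycles

renoir_core = avg_latency_cycles(8, 4)    # a core sees only its 4 MB slice
cezanne_core = avg_latency_cycles(8, 16)  # a core sees the full 16 MB
```

Under these assumed numbers, the average latency for an 8 MB working set drops from 120 to 40 cycles. The real gain depends entirely on the workload, but the direction of the effect is what AMD points to for gaming.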

Improved Memory Controller for Cezanne and Lucienne

One of the key metrics in mobile processors is the ability to eliminate excess power overhead, especially when transitioning from an active state to an idle state. All major silicon vendors that build laptop processors work towards enabling super-low power states for when users are idle, because it increases battery life.

A lot of users will be familiar with features that keep the processor cores or the graphics in low power states, but the interconnect fabric and the memory controller are also part of this picture. One of the new developments for Ryzen 5000, in both Cezanne on Zen 3 and Lucienne on Zen 2, is that AMD has enabled deeper low-power states for the memory physical layer (PHY) interface. This enables the system to save power when the memory subsystem is either not needed or in a period of low activity. It means putting the fabric and memory on their own voltage plane, but also enabling the required logic to drive them to a lower power when idle. AMD states that the low-dropout regulators (LDOs) are configured to enable this transition, and in certain circumstances allow the PHY to be bypassed to further lower power consumption.

The tradeoff with having a part of the processor in such a low power state is the time it takes to recover from idle, which is also a metric to keep track of. AMD is stating that the design in Ryzen 5000 also enables a fast exit to full activity, meaning that the high performance modes can be entered quickly.
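This tradeoff can be sketched with a simple break-even calculation (all figures hypothetical): entering and exiting a deep state costs a fixed amount of energy, so the idle period must last long enough for the power saved while resident to pay that back:

```python
def break_even_residency_us(transition_energy_uj, power_saved_mw):
    # Minimum idle duration (microseconds) for a deep power state to be a
    # net win: energy spent on entry/exit vs. power saved while resident.
    return transition_energy_uj / (power_saved_mw / 1000.0)

# Hypothetical figures: 50 uJ to enter and exit, 200 mW saved while resident.
threshold = break_even_residency_us(50, 200)  # 250 us
```

Lowering either the transition cost or the exit latency lets the hardware use the deep state for ever-shorter idle windows, which is exactly the lever AMD says it is pulling here.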

Also on the memory front, it would appear that AMD is doubling capacity support for both LPDDR4X and DDR4. For this generation, Cezanne systems can be enabled with up to 32 GB of LPDDR4X-4267 (68.2 GB/s), or up to 64 GB of DDR4-3200 (51.2 GB/s). The benefits of LPDDR4X are lower power and higher bandwidth, while DDR4 enables higher capacity and a potentially upgradable design.
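Those bandwidth figures follow directly from the transfer rates and bus widths. As a quick sanity check, assuming the usual 128-bit total interface in both cases (dual-channel 64-bit for DDR4):

```python
def peak_bandwidth_gbs(mega_transfers_per_s, bus_bits):
    # Peak theoretical bandwidth: transfers per second x bytes per transfer.
    return mega_transfers_per_s * 1e6 * (bus_bits // 8) / 1e9

lpddr4x_4267 = peak_bandwidth_gbs(4267, 128)  # ~68.3 GB/s
ddr4_3200 = peak_bandwidth_gbs(3200, 128)     # 51.2 GB/s
```

The LPDDR4X result of ~68.27 GB/s matches the quoted 68.2 GB/s to rounding.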

Per-Core Voltage Control for Cezanne and Lucienne

In line with the same theme of saving power, not only should the periphery of the core be managed for idle use, but the cores should be as well. In Ryzen 4000 Mobile, AMD had a system whereby each core could run at a separate frequency, which saved some power, but the drawback was that all the cores sat on a single voltage plane – even if one core was idle while another was heavily loaded, every core ran at that top voltage. This changes across the Ryzen 5000 Mobile family, as Cezanne and Lucienne both feature voltage control on a per-core level.

The slide from AMD shows it best – the cores running at higher frequencies get higher voltage, and the cores that are idling can reduce their voltage to save power. One of the main limits to enabling this sort of profile, aside from actually having the control to do it in the first place, is doing it fast enough both to matter for power consumption and to remain transparent to the user – the cores should still be able to come up to a high voltage/high frequency state within a suitable time. AMD’s design works with operating system triggers and quality of service hooks to apply high-frequency modes in a task-based format.
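To see why per-core voltage matters, take the classic dynamic-power relation P ∝ C·f·V² with made-up numbers – one core boosting at 1.2 V while three idle cores tick along at a low clock:

```python
def dynamic_power(freq_ghz, volt, cap=1.0):
    # Simplified CMOS dynamic power: P ~ C * f * V^2 (normalized units).
    return cap * freq_ghz * volt ** 2

busy = dynamic_power(4.0, 1.2)
# Ryzen 4000 style: one voltage rail, idle cores stuck at the busy core's 1.2 V.
shared_rail = busy + 3 * dynamic_power(0.4, 1.2)
# Ryzen 5000 style: per-core rails let idle cores drop to (a hypothetical) 0.7 V.
per_core = busy + 3 * dynamic_power(0.4, 0.7)
```

With these illustrative numbers the idle cores' dynamic power falls by roughly two-thirds (0.7² vs 1.2²), about a 15% saving across the whole cluster; real savings depend on leakage and the actual voltage points, but the shape of the win is the same.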

On AMD’s desktop processors, we saw the introduction of a feature called CPPC2 help enable this, and the same is true on the mobile processors; however, it took another generation to make the required design and firmware changes.

Power and Response Optimization (CPPC2) for Cezanne and Lucienne

As we accelerate into the future of computing, making the most out of each individual bit of silicon is going to matter more. This means more control, more optimization, and more specialization. For Cezanne and Lucienne, AMD is implementing several CPPC2 features first exhibited on desktop silicon to try and get the most out of the silicon design.

‘Preferred Core’ is a term used mostly on the desktop space to indicate which CPU core in the design can turbo to the highest frequency at the best power, and through a series of operating system hooks, the system will selectively run all single-threaded workloads on that core assuming no other workload is present. Previously, threads could bounce around to enable a more equal thermal distribution – AMD will now selectively keep the workload on the single core until thermal limits kick in, enabling peak performance and no extra delays from thread switching. For overclockable systems, this typically also represents the best core for boosting the frequency, which becomes relevant for Ryzen 5000 Mobile and the new HX series processors.
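The operating-system side of this is ultimately just thread affinity. As a Linux-only sketch (the real preferred-core ranking is reported by firmware through the CPPC driver; core 0 below is an arbitrary stand-in, not the actual best core):

```python
import os

def pin_to_core(core_id):
    # Restrict the current process to a single core - roughly what the
    # scheduler does automatically for the firmware-ranked preferred core.
    os.sched_setaffinity(0, {core_id})
    return os.sched_getaffinity(0)
```

Calling `pin_to_core(0)` leaves the process with an affinity mask of `{0}`, so every subsequent single-threaded burst lands on that core rather than bouncing between cores.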

Another part of CPPC2 is frequency selection, which reduces the time to transition from low frequency to high frequency from 30 milliseconds down to under 2 milliseconds. This equates to a two-frame adjustment in frequency being reduced down to a sub-frame adjustment. As a result, workloads that last less than 30 milliseconds can take advantage of a momentarily higher frequency and complete sooner – it also makes the system more responsive to the user, not only in idle-to-immediate environments, but also in situations where power is being distributed across the SoC and those ratios are adjusting for the best performance, such as when the user is gaming. Enabling load-to-idle transitions on the order of 2 milliseconds also improves battery life by putting the processor in a lower power state both quicker and more often, such as between key presses on the keyboard.
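The frame arithmetic checks out at 60 Hz, where one frame lasts 1000/60 ≈ 16.7 ms:

```python
FRAME_MS = 1000 / 60  # one frame at 60 Hz, ~16.7 ms

old_ramp = 30 / FRAME_MS  # ~1.8 frames for the old 30 ms transition
new_ramp = 2 / FRAME_MS   # ~0.12 frames - comfortably sub-frame
```

So a frequency ramp that used to span nearly two rendered frames now completes well inside a single frame.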

The third part of CPPC2 is the migration away from discrete legacy power states within the operating system. With an OS that has a suitable driver (modern Windows 10 and Linux), frequency control of the processor is returned from the OS to the processor, allowing for finer-grained transitions when performance or power saving is needed. This means that rather than dealing with the handful of discrete power states we used to have, the processor has a full continuous spectrum of frequencies and voltages to work with, and will analyze the workload to decide how that power is distributed (the operating system can give hints to the processor to aid in those algorithms).
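A small sketch of the difference (the P-state table here is invented for illustration): the legacy ACPI model has to round a request up to the next discrete step, while CPPC lets the processor grant any frequency within its range:

```python
LEGACY_PSTATES_MHZ = (1400, 1700, 2100, 2600, 3200)  # hypothetical table

def legacy_grant(request_mhz):
    # OS-driven P-states: pick the lowest discrete state that satisfies
    # the request, or the top state if nothing does.
    for state in LEGACY_PSTATES_MHZ:
        if state >= request_mhz:
            return state
    return LEGACY_PSTATES_MHZ[-1]

def cppc_grant(request_mhz, fmin=1400, fmax=3200):
    # Hardware-controlled CPPC: a continuous range, clamped to min/max.
    return min(max(request_mhz, fmin), fmax)
```

A 2200 MHz request is over-served at 2600 MHz under the legacy table but granted exactly under CPPC – that gap between what is needed and what is granted is where the finer-grained power savings come from.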

GPU Improvements on Cezanne and Lucienne: Vega 8 to Vega 8+

As mentioned on the previous page, one of the criticisms leveled at this new generation of processors is that we again get Vega 8 integrated graphics, rather than something RDNA based. The main reason for this is AMD’s re-use of design in order to enable a faster time-to-market with Zen 3. The previous generation Renoir design with Zen 2 and Vega 8 was built in conjunction with Cezanne to the point that the first samples of Cezanne were back from the fab only two months after Renoir was launched.

It is worth looking at the change in integrated graphics from the start of Ryzen Mobile. The first-generation Raven Ridge was built on 14nm, had Vega 11 graphics, and had a maximum frequency around 1200 MHz. The graphics in the Renoir design were built on 7nm, and despite the step down from Vega 11 to Vega 8, efficiency was greatly increased and frequency took a healthy jump up to 1750 MHz. Another generation on, with Cezanne and Lucienne, the graphics get another efficiency boost, enabling +350 MHz for added performance.
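Put as numbers from the text, the generational frequency steps look like this:

```python
raven_ridge, renoir, cezanne = 1200, 1750, 2100  # peak iGPU MHz from the text

gen1_gain = (renoir - raven_ridge) / raven_ridge  # ~+46%: 14nm Vega 11 -> 7nm Vega 8
gen2_gain = (cezanne - renoir) / renoir           # +20%: the +350 MHz step
```

A smaller relative step than the 14nm-to-7nm jump, as expected for an intra-node refinement, but a meaningful uplift for the same Vega 8 configuration.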

Part of this update is down to tweaks and minor process updates. AMD is able to control the voltage regulation better to allow for new minimums, reducing power, and has enabled a new frequency-sensitive prediction model for performance. With the greater power controls on the CPU and SoC side, more of the power budget can be made available to the integrated graphics, allowing for higher peak power consumption, which also helps boost frequency.

Note that these features apply to both Cezanne and Lucienne, meaning that the Zen 2 products in the Ryzen 5000 Mobile family do get a sizeable boost in graphics performance over Renoir here. Ultimately it is the 15 W market at which this update is aimed, given that the H-series (including HS and HX) are likely to be paired with discrete graphics cards.

As and when AMD decides to move from Vega to RDNA, we’re likely going to see some of the Cezanne design re-used, such that we might see Zen 3 + RDNA in the future, or the combined Zen 4 + GPU chip might be a full upgrade across the board. This is all speculation, but AMD’s CEO Lisa Su has stated that being able to re-use silicon designs like this is a key part of the company’s mobile processor philosophy going forward.

Security Updates in Cezanne

One of the features of Zen 3 is that it enables AMD’s latest generation of security updates. The big update in Zen 3 is the addition of Control Flow Enforcement Technology, known as CET. Here the processor creates shadow stacks for return calls to ensure that the correct return addresses are used at the end of functions; similarly, indirect branch jumps and calls are monitored and protected should an attacker attempt to modify where an indirect branch is headed.
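The shadow-stack idea is easy to model in a few lines (a conceptual sketch only – real CET lives in hardware and keeps the shadow stack in protected memory):

```python
def run_with_shadow_stack(trace):
    # Toy model of CET's shadow stack: every CALL pushes the return address
    # onto a second, protected stack; every RET must match the shadow copy
    # or execution is aborted. 'trace' is a list of ('call', addr) /
    # ('ret', addr) events, purely illustrative.
    shadow = []
    for op, addr in trace:
        if op == "call":
            shadow.append(addr)
        elif op == "ret":
            if not shadow or shadow.pop() != addr:
                return "attack detected"  # ROP-style overwrite caught
    return "ok"

benign = [("call", 0x1000), ("ret", 0x1000)]
hijacked = [("call", 0x1000), ("ret", 0xdead)]  # return address overwritten
```

A benign call/return pair passes, while a return address overwritten to 0xdead (a ROP-style hijack) trips the check, which is the class of attack shadow stacks defeat.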

Both AMD and Intel have spoken about including Microsoft Pluton security in their processors, and we can confirm that neither Cezanne nor Lucienne have Pluton as part of the design. Both AMD and Intel have stated that it will be integrated ‘in the future’, which seems to suggest we may still be another generation or two away.

Process Node Updates on Cezanne and Lucienne

Perhaps one of the smaller updates this time around: AMD has stated that both Cezanne and Lucienne use the latest intra-process node updates to N7 for these products. While the previous-generation Renoir and these two all use TSMC’s N7 process, minor changes are made over the lifecycle of a manufacturing node – some reduce defect density and increase yield, while others are voltage/frequency updates enabling better efficiency or a skew towards better binning at a different frequency. Usually these additions are minor to the point of not being noticeable, and AMD hasn’t said much beyond ‘latest enhancements’.


  • Ptosio - Tuesday, January 26, 2021 - link

    ARM is not some magic silver bullet - MediaTek has vast experience with ARM but are their chromebook chips anywhere close to Apple M1? (or Zen3 for that matter?)

    And remember AMD is yet to get access to the same TSMC process as Apple - maybe once they're on par, a large part of that efficiency advantage disappears?
  • ABR - Wednesday, January 27, 2021 - link

    AMD has K12, which Jim Keller also worked on, waiting in the wings. Most assuredly they have continued developing it. Whether it will play in the same league with M1 remains to be seen, but they also have the graphics IP to go with it so they could likely come out with a strong offering if it comes to that. Not sure what Intel will do..
  • Deicidium369 - Wednesday, January 27, 2021 - link

    ancient design, far exceeded by even 10 year old ARM designs.
  • Spunjji - Thursday, January 28, 2021 - link

    You say some really silly things
  • Spunjji - Thursday, January 28, 2021 - link

    "Apple will outclass everything x86 once they introduce their second gen silicon with much higher core count and other architectural improvements."

    I'll believe it when I see it. Their first move was far better than expected, but it doesn't come close to justifying the claims you're making here.
  • Glaurung - Saturday, January 30, 2021 - link

    M1 is Apple's replacement for ultra-low power, nominal 15w Intel chips. Later this year we will see their replacement for higher powered (35-65w) Intel chips. Nobody knows what those chips will be like yet, but it's pretty obvious they'll have 8 or 16 performance cores instead of just 4, with a similar scale up of the number of GPU cores. They'll add the ability to handle more than 16gb and two ports, and they will put it in their high end laptops and imac desktops. Potentially also on the menu would be a faster peak clock rate. That's not an "I'll believe it when I see it," that's a foregone conclusion. Also a foregone conclusion: next year they will have an even faster core with even better IPC to put in their phones, tablets, and computers.

    As of last year, Apple's chips had far better IPC and performance per watt than anything Intel or AMD could make, and they only fell short on overall performance due to only having 4 performance cores in their ultra-low power chips.

    (For the record, I use Windows. But there's no denying that Apple is utterly dominating in the contest to see who can make the fastest CPUs)
  • GeoffreyA - Sunday, January 31, 2021 - link

    Apple will release faster cores but so will AMD. And now that they've got an idea of what Apple's design is capable of, I'm pretty sure they could overtake it, if they wanted to.
  • GeoffreyA - Sunday, January 31, 2021 - link

    As much as I hate to say it, the M1 could be analogous to Core and K8 in the Netburst era. The return to lower clock speeds, higher IPC, and wider execution. Having Skylake and Sunny C. as their measure, AMD produced so and so (and brilliant stuff too, Zen 3 is). Perhaps the M1 will recalibrate the perf/watt measure, like Conroe did, like the Athlon 64 did.

    I've got a feeling, too, that ARM isn't playing the role in the M1 that people are thinking. It's possible the difference in perf/watt between Zen 3 and M1 is due not to x86 vs. ARM but rather the astonishing width of that core, as well as caches. How much juice ARM is adding, I doubt whether we can say, unless the other components were similar. My belief, it isn't adding much.
  • Farfolomew - Thursday, February 4, 2021 - link

    Very nice comment, and this little thread is a really fascinating read. I've not thought of the comparisons of the P4 -> Core2Duo Mhz regression, but I really think you're on to something here. The thing is, this isn't anything new with M1, Apple has been doing it since the A9 back in 2015, when it finally had IPC parity with the Core M chips. The M1 is just the evolution and scaling up to that of an equivalent TDP laptop chip that Intel has been producing.

    So the question, then, is, if it's not the "ARM" architecture giving the huge advantages, why haven't we seen a radical shift in the x86 technology back to ultra wide cores, and caches? Or maybe we are, incrementally, with Ice/Tiger Lake, and Zen 2/3/4?

    Very fascinating times!
  • GeoffreyA - Sunday, February 7, 2021 - link

    "Or maybe we are, incrementally, with Ice/Tiger Lake, and Zen 2/3/4?"

    I think that sums it up. As to why their scaling is going at a slower rate, there are a few possible explanations. Likely, truth is somewhere in between.

    Firstly, AMD and Intel have aimed for high-frequency designs, which is at loggerheads with widening of a core. Then, AMD has been targeting Haswell (and later) perf/watt with Zen. When one's measure is such, one won't go much beyond that (Zen 2 and 3 did, but there's still juice in the tank). Lastly, it could be owing to the main bottleneck in x86: the variable-length instructions, which make parallel decoding difficult. Adding more decoders helps but causes power to go up. So the front end could be limiting how much we can widen our resources down the line.

    Having said that, I still think that AMD's ~15% IPC increase each year has been impressive. "The stuff of legend." Intel, back when it was leading, had us believe such gains were impossible. It's going to be some interesting years ahead, watching the directions Intel, Apple, and AMD take. I'm confident AMD will keep up the good work.
