Maxwell: Designed For Energy Efficiency

While Maxwell doesn’t come with a significant overhaul of its high-level feature set, the same cannot be said for its low-level design. In fact, the consistency at a high level belies just how much work NVIDIA has done under the hood to improve efficiency for Maxwell. Maxwell isn’t a complete overhaul of NVIDIA’s designs, nor is it even as aggressive as Kepler was when it eliminated Fermi’s hot clocks in favor of a wider design, but it does feature a number of changes that are important to understanding the architecture, and more importantly, to understanding how NVIDIA is achieving their efficiency goals.

Broadly speaking, with Maxwell NVIDIA is almost solely focused on improving energy efficiency and performance per watt. This extends directly from NVIDIA’s mobile-first design strategy for Maxwell, where the company needs to maximize energy efficiency in order to compete and win in the mobile space. If NVIDIA can bring down their energy consumption, then due to the power limiting factor we mentioned earlier, they can use that recovered power overhead to further improve their performance. This is especially noticeable in SoC-class products and discrete mobile, due to the low power budgets these platforms provide.

To a lesser extent NVIDIA is also focused on space efficiency. GPU production costs and space efficiency go hand-in-hand, so there’s an interest in improving the density of their designs with Maxwell. This is especially the case when the earlier power savings allow for a wider GPU with a larger number of functional units within the same power envelope. Denser designs allow NVIDIA to offer performance similar to larger Kepler GPUs (e.g. GK106) with a smaller Maxwell GPU.

To achieve this NVIDIA has taken a number of steps, some of which they’ve shared with us at a high level and some of which they haven’t. NVIDIA is taking a bit of a “secret sauce” approach to Maxwell at the design level, so while we know a fair bit about its execution model, we don’t know quite as much about the little changes that add up to Maxwell’s energy and space savings. However, NVIDIA tells us that overall they’ve been able to outright double their performance per watt on Maxwell versus Kepler, which is nothing short of amazing given that all of this is being done on the same 28nm process as Kepler.
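Claims like this are straightforward to sanity check at home. As a rough sketch of how one might measure it (this is our own illustrative method, not NVIDIA’s; the one-second polling interval and 60-second window are arbitrary choices), board power can be polled through NVML while a benchmark runs in another process, and the benchmark’s score then divided by the average draw:

```c
// Sketch: average board power over a benchmark run, polled via NVML.
// Compile with the NVML headers and link against -lnvidia-ml.
#include <stdio.h>
#include <unistd.h>
#include <nvml.h>

int main(void) {
    nvmlDevice_t dev;
    unsigned int mw, samples = 0;
    double total_mw = 0.0;

    if (nvmlInit() != NVML_SUCCESS) return 1;
    nvmlDeviceGetHandleByIndex(0, &dev);

    // Sample power draw once a second for 60 seconds while a benchmark
    // runs in another process; divide its score by the average draw.
    for (int i = 0; i < 60; i++) {
        if (nvmlDeviceGetPowerUsage(dev, &mw) == NVML_SUCCESS) {
            total_mw += mw;  // reported in milliwatts
            samples++;
        }
        sleep(1);
    }
    if (samples)
        printf("Average board power: %.1f W\n", total_mw / samples / 1000.0);

    nvmlShutdown();
    return 0;
}
```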

We’ll go over execution flow and the other gritty details on the next page, but for now let’s start with a look at NVIDIA’s Streaming Multiprocessor designs for Kepler (SMX) and Maxwell (SMM).

Immediately we can see a significant difference in the layout between the SMX and the new SMM. Whereas the SMX was for all practical purposes a large, flat design with 4 warp schedulers and 15 different execution blocks, the SMM has been heavily partitioned. Physically each SMM is still one contiguous unit, not really all that different from an SMX. But logically the execution blocks which each warp scheduler can access have been greatly curtailed.

The end result is that in an SMX the 4 warp schedulers would share most of their execution resources and work out which warp was on which execution resource for any given cycle. But on an SMM, the warp schedulers are isolated from each other and given complete dominion over a far smaller collection of execution resources. No longer do warp schedulers have to share FP32 CUDA cores, special function units, or load/store units, as each of those is replicated across each partition. Only texture units and FP64 CUDA cores are still shared.
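The repartitioning is even visible from software. As a hedged illustration, the sketch below queries the SM count and compute capability through the CUDA runtime and maps them to FP32 cores per SM; note that the mapping table is hard-coded by hand (in the style of NVIDIA’s deviceQuery sample), since cores-per-SM is not directly exposed by the API:

```cpp
// Sketch: query SM count and compute capability with the CUDA runtime,
// then map compute capability to FP32 CUDA cores per SM by hand.
#include <cstdio>
#include <cuda_runtime.h>

static int CoresPerSM(int major, int minor) {
    if (major == 3) return 192;   // Kepler SMX: one big shared pool
    if (major == 5) return 128;   // Maxwell SMM: 4 partitions of 32
    return -1;                    // other architectures omitted here
}

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) return 1;

    int cores = CoresPerSM(prop.major, prop.minor);
    printf("%s: %d SMs, compute capability %d.%d, %d FP32 cores/SM\n",
           prop.name, prop.multiProcessorCount, prop.major, prop.minor, cores);
    // GM107 in the GTX 750 Ti reports 5 SMs x 128 cores = 640 CUDA cores.
    return 0;
}
```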

Of the changes NVIDIA made to reduce power consumption, this partitioning is among the greatest. Shared resources, though extremely useful when you have the workloads to fill them, do have drawbacks. They waste space and power if they go unfed, the crossbar needed to connect them all is not particularly cheap on a power or area basis, and there is additional scheduling overhead from having to coordinate the actions of those warp schedulers. By forgoing the shared resources NVIDIA loses out on some of the performance benefits of the design, but what they gain in power and space efficiency more than makes up for it.

NVIDIA hasn’t given us hard numbers on SMM power efficiency, but for space efficiency a single 128 CUDA core SMM can deliver 90% of the performance of a 192 CUDA core SMX at a much smaller size. Put another way, that works out to 0.90 × 192 ÷ 128 ≈ 1.35, or roughly 35% more performance per CUDA core.

Moving on, along with the SMM layout changes NVIDIA has also made a number of small tweaks to improve the IPC of the GPU. The scheduler has been rewritten to avoid stalls and otherwise behave more intelligently. Furthermore, by achieving higher utilization of their existing hardware, NVIDIA doesn’t need as many functional units to hit their desired performance targets, which in turn saves on space and ultimately power consumption.

While we’re on the subject of performance efficiency, NVIDIA has also been working on memory efficiency. From a performance perspective GDDR5 is very powerful, but it’s also very power hungry, especially in comparison to DDR3. With GM107 in particular being a 128-bit design that needs to compete with the likes of the 192-bit GK106, NVIDIA has massively increased the amount of L2 cache they use, from 256KB in GK107 to 2MB on GM107. This reduces the amount of traffic that needs to cross the memory bus, reducing both the power spent on the memory bus and the need for a wider memory bus altogether.
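Both figures are exposed through the CUDA runtime, so the GK107-to-GM107 jump is easy to verify on real hardware; a minimal sketch:

```cpp
// Sketch: read the L2 cache size and memory bus width from the CUDA runtime.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) return 1;

    // GK107 reports a 256KB L2; GM107 reports 2MB on the same 128-bit bus.
    printf("%s: %d KB L2, %d-bit memory bus\n",
           prop.name, prop.l2CacheSize / 1024, prop.memoryBusWidth);
    return 0;
}
```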

Increasing the amount of cache always represents an interesting tradeoff, since cache is something of a known quantity and is rather dense, but it’s only useful if there are memory stalls or other memory operations that it can cover. Consequently we often see cache implemented in relation to whether there are any other optimizations available. In some cases it makes more sense to use the transistors to build more functional units, and in other cases it makes sense to build the cache. After NVIDIA stayed relatively stagnant on cache sizes for so long, it looks like the balance has finally shifted and the cache increase now makes the most sense.

Of course even these changes are relatively high level from an ASIC perspective. There’s always the possibility for low-level changes, and NVIDIA has followed through on these too. Case in point, both NVIDIA and AMD have been steadily improving their clock gating capabilities, and with Maxwell NVIDIA has taken another step in their designs. NVIDIA isn’t telling us just how fine-grained their gating is now for Maxwell, only that it’s a finer granularity than it was on Kepler. Given the new SM design, the most likely change is the ability to gate the individual partitions and/or the functional units within those partitions, but this is just supposition on our part.

Finally there’s the lowest of low-level optimizations: transistor-level optimizations. Again NVIDIA hasn’t provided a ton of details here, but they tell us they’ve gone through the design at the transistor level to squeeze out additional energy efficiency wherever they could find it. Given that TSMC 28nm is now a very mature process with well understood abilities and quirks, NVIDIA should be able to design and build their circuits to a tighter tolerance now than they could when working on GK107 over 2 years ago.

Comments

  • EdgeOfDetroit - Tuesday, February 18, 2014

    The EVGAs have DisplayPort, but they might be the only ones. I ordered the Superclocked 750 Ti with the $5 rebate from Newegg because it had a DisplayPort and the competitors did not.
  • Death666Angel - Tuesday, February 18, 2014

    "the 760 has been out for almost a year now and is an older process" -> Still the same 28nm process for the 760 and 750 alike. :)
  • MrPoletski - Tuesday, February 18, 2014

    This jump in cache from 256KB to 2MB... I wonder what that does for cryptocurrency mining?
  • The Von Matrices - Tuesday, February 18, 2014

    Unless the integer shift operation has been improved, not much.
  • g101 - Tuesday, February 18, 2014

    Nothing. NVIDIA is fundamentally deficient at integer compute; these are architectural decisions that NVIDIA made in hopes of squeezing out slightly better FPS. Think: anti-GPGPU, or more of a classic ASIC.

    So no, this arch isn't going to change their position with regard to the actual algorithms. Perhaps there will be a moderate increase in scrypt/SHA2 performance (due to the memory-hard nature of that implementation); however, NVIDIA's extreme (and sometimes intentional) incompetence with GPGPU leads me to believe that they still do not understand that GPGPU is the reason AMD's cards are above MSRP. It's not due to one specific hashing function, it's due to their superiority in over 11 specific functions, superior general OpenCL performance, and comparatively greater performance for many SP compute-intensive CUDA applications. For instance, cross-comparison between CUDA and OpenCL raycasting yields some very interesting results, with the OpenCL/AMD solutions outperforming CUDA 2:1, often with greater accuracy.

    CUDA is easy; NVIDIA has zero compute advantage beyond 'ease'.
  • oleguy682 - Tuesday, February 18, 2014

    AMD receives nothing for their cards being sold over MSRP. Their channel partners likely have agreements in place for this generation of processors that are locked in at a specific price or price range. Perhaps if they signed new partners, or revised their processors substantially enough to warrant a new agreement, they could take advantage of the higher-than-MSRP situation, but I doubt it. And even the ASUSes and Gigabytes of the world are likely unable to capitalize much on the demand. At best, they are able to sell boards to retailers as fast as they come off the line.

    Only the Neweggs are profiting handsomely off of this.
  • HighTech4US - Wednesday, February 19, 2014

    Von and g101, you are both wrong, as Maxwell has now greatly improved integer compute. Check out the following review page from Tom's:

    http://www.tomshardware.com/reviews/geforce-gtx-75...

    Quote: Historically, Nvidia's cards came up short against competing Radeons, which is why you see R9 290X boards selling for $700 and up. But the Maxwell architecture's improvements allow the 60 W GeForce GTX 750 Ti to outperform the 140 W GeForce GTX 660 and approach AMD's 150 W Radeon R7 265, which just launched, still isn't available yet, but is expected to sell for the same $150. On a scale of performance (in kH/s) per watt, that puts Nvidia way out ahead of AMD. Today, four GM107-based cards in a mining rig should be able to outperform a Radeon R9 290X for less money, using less power.
  • Yojimbo - Wednesday, February 19, 2014

    Which is good for NVIDIA, maybe just lucky. Trading some short-term profits for increased gamer market share is probably a good trade-off for NVIDIA. If AMD can't maintain their market share, they'll have less muscle behind their Mantle initiative.
  • hpvd - Tuesday, February 18, 2014

    Does this first small Maxwell bring support for unified virtual memory management IN HARDWARE? If yes, it would be really interesting to see how efficiently it could work...
    For details see:
    http://www.anandtech.com/show/7515/nvidia-announce...
  • willis936 - Tuesday, February 18, 2014

    I would very much like to see a comparison of GM107 in SLI to other $300 graphics card options. We 560 Ti owners are in a tough position because it's upgradin' time and there's no decent, quiet solution. SLI is still a bit of a hack, and from what I can tell it can be more of a compatibility headache than a performance gain. These cards may be the exception though.
