One of the biggest announcements from Huawei this year is that of its new GPU Turbo technology. The claim that it could provide more performance at lower power, without a hardware change, gave us quite a bit of pause; internally, it raised more than a few eyebrows. Through our discussions with Huawei at IFA this year, as well as some pre-testing, we now have a base understanding of the technology, along with additional insight into some of the marketing tactics – not all of which are the most honest representations of the new feature.

GPU Turbo: A Timeline

GPU Turbo is a new mechanism that promises significant performance and power improvements for new and existing devices. The ‘technology’ was first introduced in early June with the Chinese release of the Honor Play, and will be updated to a version ‘2.0’ with the launch of EMUI 9.0 later in the year.

Over the next few months, Huawei plans to release the technology on all of its mainstream devices, as well as going back through its catalogue. Huawei promises that all devices, irrespective of hardware, will be able to benefit.

GPU Turbo Rollout
Month           Huawei                              Honor
July/August     Mate 10, Mate 10 Pro,               Honor 10, Honor Play,
                Mate 10 Porsche Design,             Honor 9 Lite
                Mate RS Porsche Design,
                P20, P20 Pro
September       P20 Lite, P Smart, Nova 2i,         Honor View 10, Honor 9
                Mate 10 Lite, Y9
October         Mate 9, Mate 9 Pro,                 -
                P10, P10 Plus
November        -                                   Honor 8 Pro
December        -                                   Honor 7X

In the weeks following the release of GPU Turbo on the first few devices, we saw quite a lot of hype and marketing effort from Honor and Huawei to promote GPU Turbo. Across all the presentations, press releases, promoted articles, and fuzzy press analysis, one important thing was consistently missing: a technical explanation of what GPU Turbo actually is and how it works. Everything was about results, and nothing was about details. At AnandTech, it’s the details that inform our understanding of whether a new feature is genuine or not.

Huawei, to its credit, did try to reach out to us, but neither the company nor its PR ever really responded when we asked for a more technical briefing on the new mechanism. We’re not sure of the reason for this, as historically the company has often been open to technical discussions. On the plus side, at this year’s IFA we finally had the chance to meet with a team of Huawei’s hardware and software engineers and managers.

Through these discussions, we received detailed explanations that finally made sense of the past months’ marketing claims. The upshot is that we now have a better understanding of what GPU Turbo actually does (and it makes sense), although it also puts a chunk of the marketing slides on the ignore pile.

In this first piece on GPU Turbo, we’re going to go through the story of the feature in stages. First, we’ll look at Huawei’s initial claims about the technology, specifically the numbers. Second, we’ll go deeper into what GPU Turbo actually does. Third, we’ll examine the devices we have with the technology, to see what differences we can observe. Finally, we’ll address the marketing, which really needs to be scrutinized.

It should be noted that, time and resources permitting, we want to go deeper into GPU Turbo. With the upcoming launch of the Mate 20 and the new Kirin 980 SoC inside it, we will want to do a more detailed analysis with more data. This is only the beginning of the GPU Turbo story.

The Claimed Benefits of GPU Turbo: Huawei’s Figures
64 Comments

  • eastcoast_pete - Tuesday, September 4, 2018 - link

    Thanks Andrei! I agree that this is, in principle, an interesting way to adjust power use and GPU performance in a finer-grained way than otherwise implemented. IMO, it also seems to be an attempt to push HiSilicon's AI core, as its other benefits are a bit more hidden for now (for lack of a better word). Today's power modes (at least on Android) are a bit all-high or all-low, so anything finer grained is welcome. Question: how long can the "turbo" turbo for before it gets a bit warm for the SoC? Did Huawei say anything about thermal limitations? I assume the AI is adjusting according to outside temperature and SoC to outside temperature differential?

    Regardless of whether it's AI-supported or not, I frequently wish I could more finely adjust power profiles for CPU, GPU and memory and make choices for my phone myself, along the lines of: 1. Strong, short CPU and GPU bursts enabled, otherwise balanced, to account for thermals and battery use (most everyday use, no gaming), 2. No burst, energy saver all round (need to watch my battery use) and 3. High power mode limited only by thermals (gaming mode), but allowing power allocations to CPU and GPU cores to vary. Intelligent management and power allocation would be great for all of these, but especially 3.
    Reply
  • Ian Cutress - Tuesday, September 4, 2018 - link

    GPU Turbo also has a CPU mode, if there isn't an NPU present. That's enabling Huawei to roll it out to older devices. The NPU does make it more efficient though.

    In your mode 3, battery life is still a concern. Pushing the power causes the efficiency to decrease as the hardware is pushed to the edge of its capabilities. The question is how much of a trade-off is valid? Thermals can ramp a lot too - you'll hit thermal skin temperature limits a lot earlier than you think. That also comes down to efficiency and design.
    Reply
  • kb9fcc - Tuesday, September 4, 2018 - link

    Sounds reminiscent of the days when nVidia and ATI would cook some code into their drivers that could detect when certain games and/or benchmarking tools were being run and tweak the performance to return results that favored their GPU.
    Reply
  • mode_13h - Tuesday, September 4, 2018 - link

    Who's to say Nvidia isn't already doing a variation of GPU Turbo, in their game-ready drivers? The upside is less, with a desktop GPU, but perhaps they could do things like preemptively spike the core clock speed and dip the memory clock, if they knew the next few frames would be shader-limited but with memory bandwidth to spare.
    Reply
  • Kvaern1 - Tuesday, September 4, 2018 - link

    I don't suppose China has a law that punishes partyboss-owned corporations for making wild dishonest claims.
    Reply
  • darckhart - Tuesday, September 4, 2018 - link

    ehhh it's getting hype now, but I bet it will only be supported on a few games/apps. It's a bit like nvidia's game ready drivers: sure, the newest big-name game releases get support (but only for newer GPUs), and then what happens when the game updates/patches? Will the team keep the game in the library and let the AI keep testing so as to keep it optimized? How many games will be added to the library? How often? Which SoCs will continue to be supported?
    Reply
  • mode_13h - Tuesday, September 4, 2018 - link

    Of course, if they just operated a cloud service that automatically trained models based on automatically-uploaded performance data, then it could easily scale to most apps on most phones.
    Reply
  • Ratman6161 - Tuesday, September 4, 2018 - link

    meh.... only for games? So what. Yes, I know a lot of people reading this article care about games, but for those of us who don't, this is meaningless. But looking at it as a gamer might, it still seems pretty worthless. Per SoC and per game? That's going to take constant updates to keep up with the latest releases. And how long can they keep that up? Personally, if I were that interested in games, I'd just buy something that's better at gaming to begin with.
    Reply
  • mode_13h - Tuesday, September 4, 2018 - link

    See my point above.

    Beyond that, the benefit of a scheme like this, even on "something that's better at gaming to begin with", is longer battery life and less heat. Didn't you see the part where it clocks everything just high enough to hit 60 fps? That's as fast as most phones' displays will update, so any more and you're wasting power.
    Reply
  • mode_13h - Tuesday, September 4, 2018 - link

    I would add that the biggest benefit is to be had by games, since they use the GPU more heavily than most other apps. They also have an upper limit on how fast they need to run.

    However, a variation on this could be used to manage the speeds of different CPU cores and the distribution of tasks between them.
    Reply
