Huawei’s GPU Turbo: Valid Technology with Overzealous Marketing
by Ian Cutress & Andrei Frumusanu on September 4, 2018 9:00 AM EST
Posted in: Neural Networks, Kirin 970
One of the biggest announcements from Huawei this year is that of its new GPU Turbo technology. The claim that it could provide more performance at less power, without a hardware change, gave us quite a bit of pause – internally, more than a few eyebrows were raised. As part of our discussions with Huawei this year at IFA, as well as some preliminary testing, we now have a base understanding of the technology, as well as additional insight into some of the marketing tactics – not all of which are the most honest representations of the new feature.
GPU Turbo: A Timeline
GPU Turbo is said to be a new mechanism that promises significant performance and power improvements on new and existing devices. The new ‘technology’ was first introduced in early June with the Chinese release of the Honor Play, and will be updated to a version ‘2.0’ with the launch of EMUI 9.0 later in the year.
Over the next few months, Huawei plans to release the technology on all of its mainstream devices, as well as going back through its existing catalogue. Huawei promises that all devices, irrespective of hardware, will be able to benefit.
|GPU Turbo Rollout||
|---|---|
| |Mate 10 Pro, Mate 10 Porsche Design, Mate RS Porsche Design, Honor 9 lite, Mate 10 lite|
|September|Honor View 10, Mate 9 Pro|
|November|Honor 8 Pro|
In the weeks following the release of GPU Turbo on the first few devices, we saw quite a lot of hype and marketing effort on Honor and Huawei’s side to promote GPU Turbo. Across all the presentations, press releases, promoted articles, and fuzzy press analysis, one important thing was consistently missing: a technical explanation of what GPU Turbo actually is and how it works. Everything was about results; nothing was about details. At AnandTech, it’s the details that really matter to our understanding of whether a new feature is genuine or not.
Huawei, to its credit, did try to reach out to us, but neither the company nor its PR ever really responded when we asked for a more technical briefing on the new mechanism. We’re not sure of the reason for this, as historically the company has often been open to technical discussions. On the plus side, at this year’s IFA, we finally had the chance to meet with a team of Huawei’s hardware and software engineers and managers.
Through these discussions, we received detailed explanations that finally made sense of the past months’ marketing claims. The upshot is that we now have a better understanding of what GPU Turbo actually does (and it makes sense), although it also puts a chunk of the marketing slides on the ignore pile.
In this first piece on GPU Turbo, we’re going to go through the story of the feature in stages. First, we’ll look at Huawei’s initial claims about the technology, specifically the numbers. Second, we’ll go deeper into what GPU Turbo actually does. Third, we’ll examine the devices we do have with the technology, to see what differences we can observe. Finally, we’ll address the marketing, which really needs to be scrutinized.
It should be noted that, time and resources permitting, we want to go deeper into GPU Turbo. With the upcoming launch of the Mate 20 and the new Kirin 980 SoC inside it, we will want to do a more detailed analysis with more data. This is only the beginning of the GPU Turbo story.
Comments
dave_the_nerd - Tuesday, September 4, 2018 - link
Honor 7x? How about the Mate SE, its Huawei twin? (We have one...)
mode_13h - Tuesday, September 4, 2018 - link
> There is no silver bullet here – while an ideal goal would be a single optimized network to deal with every game in the market, we have to rely on default mechanisms to get the job done.
Why not use distributed training across a sampling of players (maybe the first to download each new game or patch) and submit their performance data to a cloud-based training service? The trained models could then be redistributed and potentially further refined.
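(Ed: For readers unfamiliar with the idea, the crowd-sourced scheme the commenter describes resembles federated averaging: each player’s device takes a local training step on its own performance samples, and a cloud service averages the resulting models before redistributing them. The sketch below is purely illustrative, with a toy linear model and made-up sample data; it is not anything Huawei has described.)

```python
# Hypothetical sketch of the crowd-sourced training loop proposed above.
# Toy linear model; all data and names are invented for illustration.

def local_update(weights, samples, lr=0.01):
    """One gradient step of a linear model on a player's (x, y) perf samples."""
    grad = [0.0] * len(weights)
    for x, y in samples:
        pred = sum(w * xi for w, xi in zip(weights, x))
        err = pred - y
        for i, xi in enumerate(x):
            grad[i] += 2 * err * xi / len(samples)
    return [w - lr * g for w, g in zip(weights, grad)]

def server_aggregate(client_weights):
    """Federated averaging: the cloud service averages the client models."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# Two early adopters of a new game each refine the global model locally,
# then the server averages their updates and redistributes the result.
global_w = [0.0, 0.0]
clients = [
    [((1.0, 0.0), 2.0), ((0.0, 1.0), 3.0)],  # player A's perf samples
    [((1.0, 1.0), 5.0)],                      # player B's perf samples
]
updates = [local_update(global_w, s) for s in clients]
global_w = server_aggregate(updates)  # pushed back out to all devices
```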
tygrus - Tuesday, September 4, 2018 - link
The benefits seem <10% and don't beat the competition (e.g. Samsung S8 to S9 models).
Why 7 pages when it could have been done in 3 or 4 pages. The article needed more effort to edit and remove the repetition & fluff before publishing. Maybe I'm having a bad day but it just seemed harder to read than your usual.
LiverpoolFC5903 - Wednesday, September 5, 2018 - link
That is the whole point of the article, to go into depth. If you don't like technical 'fluff', there are hundreds of sites out there with one-pagers more to your liking.
zodiacfml - Wednesday, September 5, 2018 - link
I guess I wasn't wrong when I first heard of this. I was thinking "dynamic resolution" or an image-quality-reducing feature that works on the fly.
s.yu - Friday, September 7, 2018 - link
Technically Huawei isn't directly responsible, you know the article said that all Mali devices run on lower settings.
The thing you didn't anticipate is that they were citing numbers compared to the Kirin 960, which threw everyone off. Seeing as they're Huawei, I knew they were lying somehow, but seeing how Intel put their cross-generation comparisons in big bold print for so long, I didn't realize somebody could hide such a fact in small print, or even omit it in most presentations.
Manch - Wednesday, September 5, 2018 - link
<quote> (Ed: We used to see this a lot in the PC space over 10 years ago, where different GPUs would render different paths or have ‘tricks’ to reduce the workload. They don’t anymore.) </quote>
How does this not happen anymore? Both Nvidia & AMD create game-ready drivers to optimize for different games. AMD does tend to optimize for Mantle/Vulkan more so than DX12 (Explain SB AMD...WTF?). Regardless, these optimizations are meant to extract the best performance per game. Part of that is reducing workload per frame to increase overall FPS, so I don't see how this does not happen anymore.
Ian Cutress - Wednesday, September 5, 2018 - link
This comment was more about the days where 'driver optimizations' meant trading off quality for performance, and vendors were literally reducing the level of detail to get better performance. Back in the day we had to rename Quake3 to Quack3 to show the very visible differences that these 'optimizations' had on gameplay.
Manch - Wednesday, September 5, 2018 - link
Ah OK, makes sense. I do remember those shenanigans. Thanks
Manch - Wednesday, September 5, 2018 - link
oops, messed up the quotes....no coffee :/