When we ran our Ask the Experts with ARM CPU guru Peter Greenhalgh some of you had GPU questions that went unanswered. A few weeks ago we set out to address the issue and ARM came back with Jem Davies to help. Jem is an ARM Fellow and VP of Technology in the Media Processing Division, and he's responsible for setting the GPU and video technology roadmaps for the company. Jem is also responsible for advanced product development as well as technical investigations of potential ARM acquisitions. Mr. Davies holds three patents in the fields of CPU and GPU design and got his bachelor's from the University of Cambridge.

If you've got any questions about ARM's Mali GPUs (anything on the roadmap at this point), the evolution of OpenGL, GPU compute applications, video/display processors, GPU power/performance, heterogeneous compute or more feel free to ask away in the comments. Jem himself will be answering in the comments section once we get a critical mass of questions.


  • JemDavies - Wednesday, July 2, 2014 - link

    Thanks for the comments. We thought that the headlines on the Dolphin post were mainly complaining about missing desktop OpenGL drivers and open source drivers/documentation, which I have addressed already. Whilst I am sure that all comments about bugs in drivers reflect real frustration on the part of developers, we have also received plaudits from our Partners for the support we give and the quality of our drivers. We scored highest in an analysis of our OpenGL ES compiler, for example. If you look at the conversations on our developer community website (http://community.arm.com/groups/arm-mali-graphics)... you can see many people getting their comments and questions addressed, so I question whether the perception that you describe is completely widespread. However we do take it seriously.

    One thing we are working on to improve the experience for developers is to improve the speed with which new, improved drivers are rolled out through our Partners. We understand there is an issue there and delays in that process do cause irritation to developers and we are working closely with our Partners on this issue.
    Reply
  • jospoortvliet - Friday, July 4, 2014 - link

    ... And that is exactly where not having open source drivers hurts you: it has resulted in loads of unsupported and thus broken devices on the market, and slower-than-needed update cycles. It is a shame ARM still doesn't get that the innovation and time-to-market advantages of open source are important for its ecosystem, and are the only thing that might give it a chance against Intel. Reply
  • jospoortvliet - Friday, July 4, 2014 - link

    As I've heard and seen these arguments at several industries before, let me take a stab at what I guess are some of the arguments against making the drivers open source at ARM:
    * "It gives away our intellectual property"
    "Intellectual property" is not a legal concept in itself; copyright and patents are, and you don't give either away. They are actually extremely well protected: if somebody tried to patent something you're doing, the code can prove you did it first. Pointing to an online repository with the check-in date of the code works far better than proving you had it somewhere internally. You think you give away some unique, special coding tricks to your competitors? News flash: you're not that unique or smart. There's about a 0% chance you do something super-special that is worth protecting. Hope I didn't crush an ego or two.
    * Our competitors will copy our code and use it
    Still thinking you are special and smart? Guess how often this argument comes up in these discussions, and guess how often it happens in practice. Answering 100% for the first and 0% for the second will earn you bonus points. The reason for both: engineers always think they are smarter than other engineers, and they are always wrong. Collectively, ARM engineers are not smarter than the collective intelligence of people who do NOT work for ARM. This, generalized, is why open source always wins. You really think that going over all your code, hoping to extract a super-brilliant bit that just happens to fit exactly with their GPU architecture, is something a competitor is going to throw resources at? If they did, you should be happy about it, as they would be wasting their time and money.
    * It is a big investment to open source things, you have to go over every line of code
    Dude. Really? Who told you that? Your lawyers? Like they are worth listening to. Lawyers always take the easy road out. Everything closed and ideally secured against alien invasion. That is how you kill companies. Reality might not be fluffy bunnies but it isn't what the lawyers say either. Just hire some people who actually know what they are talking about. Heck, I bet you already have those. Some auditing will be needed, a few lines of code from a supplier will be accidentally open sourced (which will prove to be utterly uninteresting) and nobody will get hurt.
    * Our code is ugly
    So? Nobody cares, and if they do, tell them to (help) fix it.
    * It is a security risk
    Only if you still live in the 1980s and follow that era's security philosophy. It's 2014, so repeat after me: "security through obscurity DOES NOT WORK".
    * It is hard work to work in the open and it slows us down
    It will take a little getting used to and you'll have a few project managers who still don't get it barking at you but reality is: open source is FASTER. Very much so. Linux kernel anybody?
    * We don't need it
    Yeah, that's what Google said when they effectively forked the Linux kernel with their first Android version. They don't have the excuse of being 'just a small company', yet even they could not keep up with development and had to come back to the kernel community with their tail between their legs. You really think you can do better? There's that massive ego again!

    I could go on for a while, but I'm hoping you have some competent people who already explained all these things to the naysayers - and the latter just don't have the insight to get it, claiming things are 'more complicated' and all that. I've seen it before. And I get the conservatism - better to do it properly than half-way (look at how Sun screwed up). But mark my words: either ARM opens up, or it goes down. It really is 2014, and especially small players like ARM have to collaborate and stand on the shoulders of giants to be able to really compete in the long run. Especially with players like Intel around - which, for its size, is a surprisingly agile and smart company.

    Good luck.
    Reply
  • saratoga3 - Monday, June 30, 2014 - link

    Hi Jem,

    While it's nice to hear that ARM will be improving its DirectX drivers should Windows on ARM ever take off, that is likely to be of limited consolation to the Dolphin project, the various game developers I see on reddit complaining about ARM's poor driver quality on Android, or those Google developers linked above :)

    What I am curious about is what ARM will be doing in the meantime to address the perception that its GPUs are difficult to develop for due to poor driver performance, limited documentation, and buggy implementation of many features relative to Nvidia/AMD/Intel. Does ARM believe that there is a need for improvement, or are these developer concerns overblown? If improvement is needed, will ARM be taking steps to address the situation, or at least to improve communications with developers?
    Reply
  • ltcommanderdata - Monday, June 30, 2014 - link

    Imagination views OpenCL full profile and features like fp64 support and precise rounding modes as unnecessary and an inefficient use of transistors on mobile. Do you have examples of common mobile use cases that OpenCL full profile would enable over embedded profile?

    OpenGL ES 3.x explicitly did not include geometry shader and tessellation support presumably due to die area, power consumption, and market need concerns. However, many mobile GPUs are implementing full desktop DX11 support in hardware anyways. Is geometry shader and tessellation support die area intensive? Ideally we'd get both of course, but based on current and near future market needs, are game developers better served by devoting transistors on extra features like geometry shaders and tessellation or would those transistors be better used to add more ALUs for more raw performance?

    Are Mali GPU owners getting the latest graphics drivers effectively? If not, is this a major impediment to game developers and users and how can this be improved?

    Low-level graphics APIs are all the rage now. What is ARM's take on this? Can OpenGL ES be improved to increase efficiency and reduce CPU overhead through just writing better drivers, new extensions, or standard API evolution as part of OpenGL ES 4.0? Or is a new cross-vendor, cross-platform API needed? And with most mobile GPUs implementing the latest desktop feature-set is it necessary any more to have dedicated desktop and mobile graphics APIs or should they be streamlined and merged?
    Reply
  • JemDavies - Monday, June 30, 2014 - link

    We don’t regard tessellation support as area-intensive. The Midgard architecture is naturally flexible and supports tessellation without the need for additional tessellation units.

    Low-level (or “low-overhead”) APIs are indeed very fashionable right now. These APIs do away with a lot of state-tracking and error-checking so that black-belt Ninja programmers can (in some cases) gain greater performance with code they can trust to be correct. Of course this only applies to code that is limited in terms of performance by the performance of the driver rather than by the GPU itself. Well-structured code usually drives the GPU flat out and is not limited by the performance of the driver.
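    The distinction Jem draws here can be sketched with a toy model (all numbers below are invented for illustration, not ARM measurements): CPU-side driver work and GPU work overlap, so the slower of the two sets the frame time, and cutting per-call driver overhead only helps scenes that are driver-limited in the first place.

```python
# Toy model: a low-overhead API shrinks per-draw-call driver cost,
# but that only matters when the frame is driver-limited, not GPU-limited.
# All figures are illustrative assumptions.

def frame_time_ms(draw_calls, driver_cost_per_call_ms, gpu_work_ms):
    """CPU (driver) and GPU work overlap; the slower side sets frame time."""
    cpu_ms = draw_calls * driver_cost_per_call_ms
    return max(cpu_ms, gpu_work_ms)

# Driver-limited scene: many small draw calls, light GPU load.
before = frame_time_ms(5000, 0.01, 10.0)   # 50 ms of driver work vs 10 ms GPU
after = frame_time_ms(5000, 0.002, 10.0)   # low-overhead API: now GPU-bound

# Well-structured, batched code already drives the GPU flat out:
before_gpu = frame_time_ms(200, 0.01, 25.0)
after_gpu = frame_time_ms(200, 0.002, 25.0)  # same frame time: no benefit
```

    In this sketch the driver-limited scene drops from 50 ms to 10 ms per frame, while the GPU-limited scene stays at 25 ms either way, which matches the point that the gain "only applies to code that is limited by the performance of the driver rather than by the GPU itself".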

    Obviously I cannot comment on what is going on in the confidential discussion inside Khronos, but I would say that there is a great deal of interest across the industry in a modern cross-platform API and ARM is playing a central role in these discussions. It needs to be cross-platform as developers want to gain maximum return on their investment of writing code. They would obviously prefer to write it once rather than write it multiple times for multiple platforms.

    As to merging desktop OpenGL and OpenGL ES, the same pressures exist – that of developers wanting common APIs.
    Reply
  • eastcoast_pete - Monday, June 30, 2014 - link

    How well do the Mali-450 and Mali-T760 scale with core count? Are there any T760 MP8 and MP16 chips underway out there? Last one: how much thermal throttling is to be expected in real-life situations for either a 450 MP8 or a T760 MP8, i.e. in a waterproof phablet/tablet? Reply
  • JemDavies - Monday, June 30, 2014 - link

    Whilst we cannot comment on the chips our Partners are building until they have announced them, you will certainly find chips coming with the higher numbers of cores. Graphics is an easily-parallelisable problem, so the scaling that tends to be seen is fairly linear up to those sorts of numbers, providing that the memory system can supply the necessary memory bandwidth.
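    The scaling behaviour described here can be sketched as a simple roofline-style model (the rates and bandwidth figures below are invented for illustration): throughput grows linearly with core count until the memory system can no longer feed the cores, at which point bandwidth caps it.

```python
# Illustrative sketch (not ARM data): GPU core scaling is roughly linear
# until memory bandwidth becomes the limit.

def throughput(cores, per_core_rate, mem_bandwidth, bytes_per_unit):
    """Work units/s: the lower of the compute limit and the bandwidth limit."""
    compute_limit = cores * per_core_rate
    bandwidth_limit = mem_bandwidth / bytes_per_unit
    return min(compute_limit, bandwidth_limit)

# Assumed: 1.0 work units/s per core, bandwidth worth 12 units/s.
scaling = [throughput(n, 1.0, 12.0, 1.0) for n in (1, 2, 4, 8, 16)]
# Linear through 8 cores; at 16 cores the memory system caps throughput.
```

    With these assumed numbers, scaling is perfectly linear up to 8 cores and flattens at 12 units/s for 16 cores, illustrating the "fairly linear, providing the memory system can supply the necessary bandwidth" caveat.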

    Any thermal throttling is so bound up with the Partners’ implementations (what silicon process, style of implementation, numbers of cores, target frequency, thermal controls, which CPUs are paired with the GPU, case design, etc.) that it is really hard for me to give a useful answer. Sorry!
    Reply
  • shlagevuk - Monday, June 30, 2014 - link

    Hi, and thanks for your time,

    Do you think adding a ray-tracing unit like the one in the PowerVR GR6500 is a good idea in a heterogeneous computing system?
    Reply
  • JemDavies - Wednesday, July 2, 2014 - link

    As I mentioned to bengildenstein previously, ray-tracing is a great technology that can produce fantastic-looking images when rendered on supercomputer render farms. However, it is a long way from being deployable in real time on mobile devices: the power requirements are too high, as it remains a bit of a brute-force method. It is also easy to confuse ray-tracing (a solution) with Global Illumination (the problem). Global Illumination (GI) is the problem of trying to add more realistic lighting to 3D scenes by taking into account direct and indirect lighting. This is a really important issue in providing more compelling, realistic images in graphics content.

    In order to address GI in mobile, towards the end of 2013, ARM acquired Geomerics - another Cambridge-based company that have created some fantastic technology to address GI. Their product Enlighten is the industry’s most advanced dynamic lighting technology. It is the only product with proven ability to deliver real-time global illumination on today’s and tomorrow’s consoles and gaming platforms. Enlighten is behind the lighting in best-selling titles including Battlefield 3, Need for Speed: The Run, Eve Online and Quantum Conundrum and Enlighten has been licensed by many of the top developers in the industry, including EA DICE, EA Bioware, THQ, Take 2 and Square Enix. Their technology is very lightweight, and is optimised for all games platforms, including mobile. We are very happy to have them on board, and they are starting to influence our future technology roadmap already.
    Reply
