We’ve been following DirectX 12 for about two years now, watching Microsoft’s next-generation low-level graphics API go from an internal development project to a public release. Though harder to use than earlier high-level APIs like DirectX 11, DirectX 12 gives developers more control than ever before, and those who can tame it can unlock performance and rendering techniques simply not possible with earlier APIs. With the CPU bottlenecks of DirectX 11 coming into full view as single-threaded performance gains have slowed and CPUs have grown their core counts instead, DirectX 12 could not have come at a better time.

Although DirectX 12 was finalized and launched alongside Windows 10 last summer, we’ve continued to keep an eye on the API as the first games are developed against it. Since developers need the tools before they can release games, there was always going to be a lag between the launch of Windows 10 and the arrival of games built on the API, and we are finally nearing the end of that period. Consequently we’re now getting a clearer picture of what to expect from DirectX 12 games as they approach their launch.

There are a few games vying for the title of the first major DirectX 12 game, but at this point I think it’s safe to say that the first high-profile game to be released will be Ashes of the Singularity. This is because the developer, Oxide, has specifically crafted an engine and a game meant to exploit the abilities of the API – large numbers of draw calls, asynchronous compute/shading, and explicit multi-GPU – putting it a step beyond games that bolt a DX12 rendering path onto a design originally built for DX11. As a result, both the GPU vendors and Microsoft itself have used Ashes and earlier builds of its Nitrous engine to demonstrate the capabilities of the API, something we’ve previously looked at with both Ashes and the Star Swarm technical demo.

Like a number of other games these days, Ashes of the Singularity has been available in public beta via Steam Early Access, and its full, gold release on March 22nd is fast approaching. To that end Oxide and publisher Stardock are gearing up to release the second major beta of the game – the last beta before the game goes gold – and have invited the press to take a look at the beta and its updated benchmark ahead of tomorrow’s Early Access release. So today we’ll be taking a second and more comprehensive look at the game.

The first time we poked at Ashes was to investigate an early alpha of the game’s explicit multi-GPU functionality. Though the feature was only in a limited form at the time, Oxide demonstrated that they had a basic implementation of DX12 multi-GPU up and running, allowing us to pair up not only similar video cards, but also dissimilar cards from opposing vendors, making a combined GeForce + Radeon setup a reality. This early version of Ashes showed a lot of promise for DX12 multi-GPU, and after some additional development it is now finally being released to the public as part of this week’s beta.
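For those curious what this looks like at the API level, below is a minimal sketch – our own illustration, not Oxide’s code – of the first step of explicit multi-GPU under DX12: enumerating every capable adapter in the system, regardless of vendor, and creating an independent device on each one. The function name and structure here are hypothetical; a real engine would layer work distribution and cross-adapter resource copies on top of this.

```cpp
// Sketch: enumerate all hardware DX12 adapters and create a device on each.
// Link against d3d12.lib and dxgi.lib.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDevicesOnAllAdapters()
{
    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return devices;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip WARP and other software adapters

        // A GeForce and a Radeon in the same system each produce their own
        // device here; it is then up to the application - not the driver -
        // to decide how to split rendering work between them.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices;
}
```

This is what makes heterogeneous pairings possible in the first place: DXGI hands back every adapter it can find, and nothing in the API cares which vendor made them.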

Since that release Oxide has also been at work both cleaning up the code to prepare it for release and implementing even more DX12 functionality. The latest beta adds greatly improved support for another of DX12’s powerhouse features: asynchronous shading/compute. By taking advantage of DX12’s lower-level access, games and applications can directly interface with the various execution queues on a GPU, scheduling work on each queue and having it executed independently. Used well, async shading allows certain tasks to be completed in less time (lower latency) and/or makes better use of a GPU’s massive arrays of shader ALUs.
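To make that concrete, here is a minimal sketch – again ours, not Ashes’ actual code – of how an application sets up a separate compute queue alongside its graphics queue under DX12, and uses a fence so that graphics work depending on compute results waits on the GPU rather than stalling the CPU. Whether the two queues actually execute concurrently depends on the hardware, which is exactly the AMD-versus-NVIDIA question at the heart of the async shading debate.

```cpp
// Sketch: one graphics (direct) queue and one compute-only queue on the
// same device, synchronized with a fence. Command list recording omitted.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

struct AsyncQueues
{
    ComPtr<ID3D12CommandQueue> graphics;
    ComPtr<ID3D12CommandQueue> compute;
    ComPtr<ID3D12Fence>        fence;
    UINT64                     fenceValue = 0;
};

bool CreateAsyncQueues(ID3D12Device* device, AsyncQueues& out)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
    if (FAILED(device->CreateCommandQueue(&desc, IID_PPV_ARGS(&out.graphics))))
        return false;

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only
    if (FAILED(device->CreateCommandQueue(&desc, IID_PPV_ARGS(&out.compute))))
        return false;

    return SUCCEEDED(device->CreateFence(0, D3D12_FENCE_FLAG_NONE,
                                         IID_PPV_ARGS(&out.fence)));
}

void SubmitFrame(AsyncQueues& q,
                 ID3D12CommandList* computeWork,
                 ID3D12CommandList* graphicsWork)
{
    // Kick off the compute work; on hardware with independent queues it can
    // run concurrently with whatever the graphics queue is already doing.
    q.compute->ExecuteCommandLists(1, &computeWork);
    q.compute->Signal(q.fence.Get(), ++q.fenceValue);

    // The graphics queue waits on the GPU (no CPU stall) until the compute
    // results are ready, then runs the dependent graphics work.
    q.graphics->Wait(q.fence.Get(), q.fenceValue);
    q.graphics->ExecuteCommandLists(1, &graphicsWork);
}
```

The key design point is that the schedule lives in the application: under DX11 the driver had to infer this kind of overlap, while under DX12 the developer states it explicitly.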

Between its new functionality, updated graphical effects, and a significant amount of optimization work since the last beta, the latest beta for Ashes gives us quite a bit to take a look at today, so let’s get started.

More on Async Shading, the New Benchmark, & the Test
Comments (153)

  • Soulwager - Sunday, March 20, 2016 - link

    Poorly how, exactly? It looks to me like DX12 is just removing a bottleneck for AMD that Nvidia already fixed in DX11. It would be more accurate to say that AMD has poor DX11 performance compared to Maxwell, and that neither is constrained by driver overhead in DX12.
  • SunLord - Wednesday, February 24, 2016 - link

    DX12 by design will slightly favor older AMD designs, simply because of the design decisions AMD made for DX11 compared to Nvidia, which are now paying off with DX12. Nvidia's choices benefited it in DX11 games instead, which is why they own around 80% or so of the gaming GPU market. How much of an impact this will have depends on the game, just as with DX11 games: some do better on AMD, some will do better on Nvidia.
  • anubis44 - Thursday, February 25, 2016 - link

    If results like these continue with other DX12 games, nVidia's going to be the one with only 20% in a matter of months.
  • althaz - Thursday, February 25, 2016 - link

    Even in generations where AMD/ATI have been dominant in terms of performance and value, they've still not really dominated in sales.

    Just as when AMD's CPUs were offering twice the performance per watt and better performance per dollar, they still sold fewer chips than Intel did.

    Doing it for a short time isn't enough; you have to do it for *years* to get a lead like nVidia has.

    Firstly you have to overturn brand loyalty from complete morons (aka everybody with any brand loyalty to any company – these are corporations that only care about the contents of your wallet, so make rational choices). That will only happen with a small percentage of people at a time, so you have to maintain a pretty serious lead for a long time to do it.

    AMD did manage to do it in the enthusiast space with CPUs, but (arguably due to Intel being dodgy pricks) they didn't quite turn that into mainstream market dominance. Which sucks for them, because they absolutely deserved it.

    So even if AMD maintains this DX12 lead for the rest of the year and all of the next, they'll still sell fewer GPUs than nVidia in that time. But if they can do it for another year after that, *then* they would be likely to start winning the GPU war.

    Personally, I don't care a lot. I hope AMD do better because they are losing and competition is good. However, I will make my next purchasing decision on performance and price, nothing else.
  • permastoned - Sunday, February 28, 2016 - link

    Wasn't trolling - there are other metrics that show the case; for you to imply that 3dmark isn't valid is just silly: http://wccftech.com/amd-r9-290x-fast-titan-dx12-en...

    2 points = trend.

    Another thing: what's the deal with all these fanboys? There is no benefit to being a fanboy of either AMD or Nvidia; it is just going to cause you problems, because it may lead you to buy based on brand rather than on performance per dollar, which is the factor that actually matters. At different price points different brands are better – e.g. at the top end a 980 Ti is better than a Fury X, but if you are looking in the price bracket below and want to buy a 980, you will get better performance and performance per dollar from a standard Fury.

    Being a fanboy will blind you from accepting the truth when the tides shift and the tables eventually turn. It helps you in no way at all and disadvantages you in many. It also causes you to get angry on forums for no reason, and to call people 'trolls' when they are stating facts.
  • Continuity28 - Wednesday, February 24, 2016 - link

    By the time DX12 becomes commonplace, I'm sure they will have cards that were built for DX12.

    It makes a lot of sense to design your cards around what will be most useful today, not years in the future when people will be replacing their cards anyway. Does it really matter that AMD's DX12 performance is better while DX12 isn't yet relevant, if their DX11 performance is worse while DX11 is what's relevant?
  • Senti - Wednesday, February 24, 2016 - link

    Indeed, it makes much sense to build cards exactly for today, so people are forced to buy new hardware next year to keep decent performance. From a certain green point of view, that is. But many people are actually hoping that their brand new mid-to-high-end card will keep delivering decent performance for at least a few years.
  • cmdrdredd - Wednesday, February 24, 2016 - link

    Hardware performance for new APIs is always weak with first gen products. That isn't changing here. When there are many DX12 titles out and new cards are out there, you'll see that people don't want to try playing with their old cards and will be buying new. That's how it works.
  • ToTTenTranz - Wednesday, February 24, 2016 - link

    "Hardware performance for new APIs is always weak with first gen products."

    Except that doesn't seem to be the case with 2012's Radeon line.
