This week both GDC (the Game Developers Conference) and GTC (NVIDIA's GPU Technology Conference) are happening in California, and NVIDIA is out in force. The company's marquee gaming-related announcement today is that, as many have been expecting would happen, NVIDIA is bringing DirectX 12 DXR raytracing support to the company's GeForce 10 series and GeForce 16 series cards.

When Microsoft first announced the DirectX Raytracing (DXR) API just over a year ago, they set out a plan for essentially two tiers of hardware. The forward-looking plan (and long-term goal) was to get manufacturers to implement hardware features to accelerate raytracing. However, in the shorter term, and owing to the fact that the API doesn't dictate how ray tracing should be implemented, DXR would also allow for software (compute shader) based solutions for GPUs that lack complete hardware acceleration. In fact, Microsoft had been using their own internally-developed fallback layer to develop the API and test against it internally; beyond that, they left it up to hardware vendors to decide whether to develop their own mechanisms for supporting DXR on pre-raytracing GPUs.
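For game developers, the practical upshot is that DXR capability is exposed through D3D12's normal feature-detection path regardless of how a driver implements it underneath. A minimal sketch of that check, assuming an already-created D3D12 device, might look like this:

```cpp
#include <d3d12.h>

// Query whether the installed driver exposes DXR at all. The API deliberately
// doesn't reveal whether rays will be traced by dedicated hardware or by a
// compute shader fallback; both report themselves the same way here.
bool SupportsDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;

    // Tier 1.0 or better means the DXR API is available on this device.
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```

Notably, a GTX card running NVIDIA's upcoming driver and an RTX card will both pass this check; the difference only shows up in the frame times.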

NVIDIA, for their part, has decided to go ahead with this, announcing that they will support DXR on many of their GeForce 10 (Pascal) and GeForce 16 (Turing GTX) series video cards: specifically, the GeForce GTX 1060 6GB and higher, as well as the new GTX 1660 series video cards.

Now, as you might expect, raytracing performance on these cards is going to be much (much) slower than it is on NVIDIA's RTX series cards, all of which have hardware raytracing support via NVIDIA's RTX backend. NVIDIA's official numbers are that the RTX cards are 2-3x faster than the GTX cards, though this is going to be workload-dependent. Ultimately it's the game and the settings used that will determine just how much of an additional workload raytracing will place on a GTX card.

The inclusion of DXR support on these cards is functional – that is to say, its inclusion isn't merely for baseline featureset compatibility, à la FP64 support on these same parts – but it's very much in a lower league in terms of performance. And given just how performance-intensive raytracing is even on RTX cards, it remains to be seen just how useful the feature will be on cards lacking the RTX hardware. Scaling down image quality will help to stabilize performance, for example, but at that point will the image quality gains be worth it?

Under the hood, NVIDIA is implementing support for DXR via compute shaders run on the CUDA cores. In this area the recent GeForce GTX 16 series cards, which are based on the Turing architecture sans RTX hardware, have a small leg up. Turing includes separate INT32 cores (rather than tying them to the FP32 cores), so like other compute shader workloads on these cards, it's possible to pick up some performance by simultaneously executing FP32 and INT32 instructions. It won't make up for the lack of RTX hardware, but it at least gives the recent cards an extra push. Otherwise, Pascal cards will be the slowest in this respect, as their compute shader-based path has the highest overhead of all of these solutions.
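To see why the concurrent INT32 execution helps, consider what a compute-based raytracer spends its time on. The sketch below is purely illustrative (plain C++ rather than NVIDIA's actual shader code): traversing a bounding volume hierarchy (BVH) interleaves integer bookkeeping (node indices, stack management) with floating-point intersection math, which is exactly the mix Turing can overlap.

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>

// One node of a bounding volume hierarchy (BVH). Interior nodes store the
// index of their first child; this sketch assumes both children are stored
// adjacently.
struct BVHNode {
    float    boundsMin[3], boundsMax[3]; // FP32 data for the slab test
    uint32_t firstChild;                 // INT32 data for tree navigation
    uint32_t primCount;                  // 0 for interior nodes
};

// Ray vs. axis-aligned box "slab" test: almost entirely FP32 work.
static bool HitAABB(const BVHNode& n, const float orig[3],
                    const float invDir[3], float tMax)
{
    float t0 = 0.0f, t1 = tMax;
    for (int axis = 0; axis < 3; ++axis) {           // INT32: loop counter
        float tNear = (n.boundsMin[axis] - orig[axis]) * invDir[axis];
        float tFar  = (n.boundsMax[axis] - orig[axis]) * invDir[axis];
        if (tNear > tFar) std::swap(tNear, tFar);
        t0 = std::max(t0, tNear);
        t1 = std::min(t1, tFar);
    }
    return t0 <= t1;
}

// Stack-based traversal: the pushes/pops and index arithmetic are INT32 work
// that, on Turing, can execute alongside the FP32 slab tests above.
void Traverse(const BVHNode* nodes, const float orig[3], const float invDir[3])
{
    uint32_t stack[64];
    int      sp = 0;
    stack[sp++] = 0;                                 // start at the root
    while (sp > 0) {
        const BVHNode& n = nodes[stack[--sp]];
        if (!HitAABB(n, orig, invDir, 1e30f))        // FP32-heavy test
            continue;
        if (n.primCount == 0) {                      // interior node
            stack[sp++] = n.firstChild;              // INT32 bookkeeping
            stack[sp++] = n.firstChild + 1;
        }
        // A leaf would run ray/triangle tests here: still more FP32 math.
    }
}
```

On Pascal, every one of those integer operations occupies the same cores as the floating-point math, which is a big part of why NVIDIA's compute path carries the most overhead there.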


(Image: the list of supported cards, which includes their mobile equivalents.)

One interesting side effect is that, because DXR support is handled at the driver level, the addition of DXR support should be transparent to current DXR-enabled games. That means developers won't need to issue updates to get DXR working on these GTX cards. However, because of the performance differences, they likely will want to anyhow, if only to provide settings suitable for video cards lacking raytracing hardware.

NVIDIA's own guidance is that GTX card owners should expect to run raytraced effects at low quality settings. Users and developers will also want to avoid the most costly effects such as global illumination, and stick to "cheaper" effects like material-specific reflections.
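Translated into a game's settings code, that guidance might look something like the hypothetical preset below. Everything here is illustrative: D3D12 itself doesn't report whether RT hardware is present (a GTX card with the new driver passes the same DXR feature check as an RTX card), so the hasRTCores flag would have to come from vendor-specific detection or benchmarking.

```cpp
// Hypothetical quality preset, following NVIDIA's guidance above: on cards
// without RT hardware, skip the most expensive effect (global illumination)
// and keep only cheap, material-specific reflections at low quality.
struct RaytracingSettings {
    bool enableGlobalIllumination;
    bool enableReflections;
    int  reflectionQuality;     // 0 = low, 1 = medium, 2 = high
};

RaytracingSettings PresetForGPU(bool hasRTCores) // flag is an assumption;
{                                                // it isn't exposed by D3D12
    if (hasRTCores)
        return { true, true, 2 };   // RTX: the full suite of effects
    return { false, true, 0 };      // GTX: no GI, low-quality reflections
}
```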

Diving into the numbers a bit more, in one example NVIDIA showed a representative frame of Metro Exodus using DXR for global illumination. The top graph shows a Pascal GPU, with only FP32 compute available, spending a long stretch in the middle of the frame on the effects. The middle bar, taken from an RTX 2080 but equally representative of a GTX 1660 Ti, shows FP32 and INT32 compute working together during the RT portion of the workload, speeding the process up. The final bar shows the effect of adding the RT cores to the mix, with the tensor cores pitching in at the end.

Ultimately the release of DXR support for NVIDIA's non-RTX cards shouldn't be taken as too surprising – on top of the fact that the API was initially demoed on pre-RTX Volta hardware, NVIDIA was strongly hinting as early as last year that this would happen. So by and large this is NVIDIA fulfilling earlier goals. Still, it will be interesting to see whether DXR actually sees any use on the company's GTX cards, or if the overall performance is simply too slow to bother. The performance hit in current games certainly favors the latter outcome, but it's going to be developers that make or break it in the long run.

Meanwhile, gamers will get to find out for themselves with NVIDIA's DXR-enabled driver release, which is expected to come out next month.

Source: NVIDIA

Comments

  • cmdrdredd - Monday, March 18, 2019 - link

    Go back and read the article again. It says DXR will allow non-RT products to enable "some" ray tracing effects. It didn't say that all the same effects will be enabled. They even went so far as to say it's a stripped down version in order to keep a playable frame rate. That doesn't necessarily mean 60fps, but you can get global illumination or better shadows etc. using DXR at perhaps something near 30fps, which IMO isn't terrible for certain types of games.
  • blppt - Monday, March 18, 2019 - link

    Either way, it lets you whet your appetite on "Ray Tracing Lite" which might entice you to upgrade to the slow-selling RTX.
  • Qasar - Monday, March 18, 2019 - link

    the only thing that would entice me to upgrade to RTX would be for nvidia to drop the prices, to around $200 on the low-end entry 2060 and $1K on the high end :-) current RTX 20 series cards range from $470 CDN to as high as $2100 CDN at the very top........
  • nathanddrews - Tuesday, March 19, 2019 - link

    "New GPUs offer better image quality via new features, more at 11."
  • CiccioB - Tuesday, March 19, 2019 - link

    Yes, enabling RT effects on everything is a marketing move, both to get developers started using those new features and to wet players' tongues with the salty sauce.
    But this has never been a real problem.
    MS DXR is a general API, as are all DX APIs. You can implement it in HW, in SW, or a mix. There are no specific HW requirements, just functionality targets that can be achieved however a vendor wants, with whatever architecture they want (or are able to create).
    So why invest so much money in things like new HW architectures each year?
    Because what changes between using dedicated HW units and not is performance, which is ultimately what games buy.

    Is RT possible on HW without dedicated units?
    Yes, it has always been. See the demos nvidia made with Volta, and then the same ones done with Turing, to see what dedicated units can achieve.
  • CiccioB - Tuesday, March 19, 2019 - link

    "what games buy" -> "what gamers buy"
  • haukionkannel - Tuesday, March 19, 2019 - link

    Yep! Good marketing to make old cards look really slow!
  • blppt - Tuesday, March 19, 2019 - link

    It actually is, if your goal is to move more silicon.

    Otherwise, given the meh speed increases for non-RT games that a 2080 Ti provides over late-model, heavily factory-OC'd 1080 Tis, gamers have little incentive to drop $1200 on a new video card.
  • Cullinaire - Monday, March 18, 2019 - link

    The thing that interests me most about RT is the end of weird and fuzzy shadows, especially self-shadowing.
  • CiccioB - Tuesday, March 19, 2019 - link

    Shadows, illumination, reflections, and even refraction (with enough calculation power), but also AI. What is lacking in today's games is gameplay, which in most cases is highly repetitive (who said FarCry?).
    AI applied to opponent strategy may really change the final quality of games, which is about more than graphics effects and pumped-up texturing (to balance the low polygon counts that consoles can barely stand).
