Alongside a sneak peek at their forthcoming Xe-HPG architecture, the other big reveal today from Intel’s consumer graphics group comes from the software side of the business. Along with preparing Intel’s software stack for the 2022 launch of the first Arc products, the group has also been hard at work on their own take on modern, neural net-driven image upscaling techniques. The product of that research is Xe Super Sampling, or XeSS, which Intel is pitching as the best solution yet for high-quality, low-cost image upscaling.

As briefly hinted at by Intel at the start of this week with the announcement of their Arc video card brand, the company has been developing their own take on image upscaling. As it turns out, they’re actually quite far along, so for today they’re not just announcing XeSS, but they are showing off footage of the technology as well. Even better, the initial version of the SDK will be shipping to game developers later this month.

XeSS (pronounced “ex-ee-ess-ess”) is, at a high level, a combination spatial and temporal AI image upscaling technique, which uses trained neural networks to integrate both image and motion data in order to produce a superior, higher resolution image. This is a field that has seen a great deal of research over the last half-decade, and it was brought to the forefront of the consumer space a couple of years ago by NVIDIA with their DLSS technology. Intel’s XeSS technology, in turn, is designed to address similar use cases, and from a technical perspective it ends up looking a lot like NVIDIA’s current DLSS 2.x technology.

As with NVIDIA and AMD, Intel is looking to have their cake and eat it too with respect to graphics rendering performance. 4K monitors are increasingly cheap and plentiful, but the kind of performance needed to natively render modern AAA games at 4K is outside the reach of all but the most expensive discrete video cards. The desire to drive these 4K monitors with more modest video cards, and without the traditional drop in image quality, is what has driven recent research into smart image upscaling techniques, and ultimately DLSS, FSR, and now XeSS.

In choosing their approach, Intel seems to have gone in a similar direction as NVIDIA’s second attempt at DLSS. Which is to say, they’re using a combination of spatial data (neighboring pixels) and temporal data (motion vectors from previous frames) to feed a (seemingly generic) neural network that has been pre-trained to upscale frames from video games. Like many other aspects of today’s GPU-related announcements, Intel isn’t going into too much detail here. So there are plenty of outstanding questions about how XeSS handles ghosting, aliasing, and other artifacts that can arise from these upscaling solutions. With that said, what Intel is promising isn’t something that’s out of their reach if they’ve really done their homework.
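
To make the general approach a bit more concrete, below is a minimal sketch, in Python with made-up shapes and a stand-in "network", of how a DLSS 2.x-style temporal upscaler assembles its per-frame inputs. None of this is taken from Intel's SDK; it simply illustrates how spatial data (the current low-resolution frame) and temporal data (motion vectors plus the previous high-resolution output) end up stacked together as input to a trained network.

```python
# Illustrative sketch only: XeSS internals are not public. This shows, with
# hypothetical shapes and a stand-in "network", how a DLSS 2.x-style temporal
# upscaler assembles its inputs: the current low-resolution frame, per-pixel
# motion vectors, and the previous high-resolution output reprojected to the
# current frame, all fed together into a trained network.
import numpy as np

def reproject_previous(prev_hi, motion_vectors):
    """Warp last frame's high-res output backwards along the motion vectors."""
    h, w, _ = prev_hi.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Motion vectors point from the current pixel to where it was last frame.
    src_x = np.clip((xs + motion_vectors[..., 0]).round().astype(int), 0, w - 1)
    src_y = np.clip((ys + motion_vectors[..., 1]).round().astype(int), 0, h - 1)
    return prev_hi[src_y, src_x]

def upscale_frame(lo_res, motion_vectors, prev_hi, network):
    """One frame of a temporal upscaler: spatial + temporal data in, high-res out."""
    scale = prev_hi.shape[0] // lo_res.shape[0]
    # Naive spatial upsample of the current frame (nearest-neighbour here).
    current_up = np.repeat(np.repeat(lo_res, scale, axis=0), scale, axis=1)
    history = reproject_previous(prev_hi, motion_vectors)
    # Stack spatial and temporal inputs; a real network would also take depth,
    # jitter offsets, etc., and decide per pixel how much history to trust.
    net_input = np.concatenate([current_up, history], axis=-1)
    return network(net_input)

# Stand-in "network": just blend the current frame with the reprojected history.
blend = lambda x: 0.5 * x[..., :3] + 0.5 * x[..., 3:]

lo = np.random.rand(270, 480, 3)          # quarter-resolution input frame
prev = np.random.rand(1080, 1920, 3)      # previous 1080p output
mv = np.zeros((1080, 1920, 2))            # zero motion for the sketch
out = upscale_frame(lo, mv, prev, blend)
print(out.shape)                          # (1080, 1920, 3)
```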

Meanwhile, given the use of a neural network to handle parts of the upscaling process, it should come as no surprise that XeSS is designed to leverage Intel’s new XMX matrix math units, which are making their debut in the Xe-HPG graphics architecture. As we saw in our sneak peek there, Intel is baking quite a bit of matrix math performance into their hardware, and the company is no doubt interested in putting it to good use. Neural network-based image upscaling techniques remain one of the best ways to use that hardware in a gaming context, as the workload maps well to these systolic arrays, and their high performance keeps the overall hit to frame rendering times small.
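
As for why these workloads map so well onto systolic arrays: the convolution layers that dominate image-processing networks can be rewritten as one large matrix multiply via the standard im2col transform, which is exactly the shape of work a matrix engine is built for. The short sketch below is our own illustration of that reduction, not Intel code.

```python
# A minimal im2col demonstration: every kxk patch of the image becomes one row
# of a matrix, so the whole convolution collapses into a single GEMM - the kind
# of operation XMX-style matrix units are designed to chew through.
import numpy as np

def im2col(image, k):
    """Unroll every kxk patch of an HxWxC image into one row of a matrix."""
    h, w, c = image.shape
    rows = []
    for y in range(h - k + 1):
        for x in range(w - k + 1):
            rows.append(image[y:y+k, x:x+k, :].ravel())
    return np.array(rows)                      # (num_patches, k*k*c)

h, w, c, k, filters = 32, 32, 3, 3, 16
image = np.random.rand(h, w, c).astype(np.float32)
weights = np.random.rand(k * k * c, filters).astype(np.float32)

patches = im2col(image, k)                     # (900, 27)
conv_out = patches @ weights                   # one big matrix multiply: (900, 16)
print(conv_out.shape)
```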

With that said, Intel has gone one step further and is also developing a version of XeSS that doesn’t require dedicated matrix math hardware. Owing to the fact that the installed base for their matrix hardware is starting from zero, that they’d like to be able to use XeSS on Xe-LP integrated graphics, and that they want to do everything possible to encourage game developers to adopt their XeSS technology, the company is developing a version of XeSS that instead uses the 4-element vector dot product (DP4a) instruction. DP4a support is found in Xe-LP along with the past few generations of discrete GPUs, making its presence near-ubiquitous. And while DP4a still doesn’t offer the kind of performance that a dedicated systolic array does – or the same range of precisions, for that matter – it’s still fast enough to power a somewhat slower (and likely somewhat lower quality) version of XeSS.
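
For reference, DP4a computes the dot product of two packed 4-element 8-bit vectors and accumulates the result into a 32-bit integer, which is why a quantized version of a network can still run at reasonable speed on hardware without dedicated matrix units. The sketch below emulates the idea in plain Python; the precisions and packing shown are general assumptions about how such instructions are used, not anything Intel has disclosed about XeSS specifically.

```python
# What a DP4a-style instruction gives you, emulated in plain Python: one op
# multiplies four packed 8-bit values against four more and adds the result to
# a 32-bit accumulator. Quantized network layers can then run as loops of these
# dot products on hardware with no dedicated matrix units.
import numpy as np

def dp4a(a4, b4, acc):
    """Emulate DP4a: dot(int8[4], int8[4]) added to a 32-bit accumulator."""
    return acc + int(np.dot(a4.astype(np.int32), b4.astype(np.int32)))

def int8_dot(a, b):
    """Dot product of two int8 vectors, processed four elements per 'instruction'."""
    acc = np.int32(0)
    for i in range(0, len(a), 4):
        acc = dp4a(a[i:i+4], b[i:i+4], acc)
    return acc

a = np.random.randint(-128, 128, size=64, dtype=np.int8)
b = np.random.randint(-128, 128, size=64, dtype=np.int8)
# Both results match: the 4-wide loop computes the same full dot product.
print(int8_dot(a, b), int(np.dot(a.astype(np.int32), b.astype(np.int32))))
```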

By offering a DP4a version of XeSS, game developers will be able to use XeSS on virtually all modern hardware, including competing hardware. In that respect Intel is taking a page from AMD’s playbook, targeting their own hardware while also letting customers of competitors benefit from this technology – even if by not quite as much. Ideally, that will be a powerful carrot to entice game developers to implement XeSS in addition to (or even in place of) other upscaling techniques. And while we won’t put the cart before the horse, should XeSS live up to all of Intel’s performance and image quality claims, then Intel would be in the unique position of being able to offer the best of both worlds: an upscaling technology with wide compatibility like AMD’s FSR and the image quality of NVIDIA’s DLSS.

As an added kicker, Intel is also planning on eventually open sourcing the XeSS SDK and tools. At this juncture there are no further details on their commitment – presumably, they want to finish and refine XeSS before releasing their tech to the world – but this would be a further feather in Intel’s cap if they can deliver on that promise as well.

In the meantime, game developers will be able to get their first look at the technology later this month, when Intel releases the initial, XMX-only version of the XeSS SDK. This will be followed by the DP4a version, which will be released later this year.

Finally, along with today’s technology disclosure Intel has also posted some videos of XeSS in action, using an early version of the technology baked into a custom Unreal Engine demo. The minute or so of footage shows several image quality comparisons between native 4K rendering and XeSS, which is upscaling from a native 1080p image.

As with all vendor demos, Intel’s should be taken with a suitable grain of salt. We don’t have any specific framerate data to go along with it, and Intel’s demo is fairly limited. In particular, I would have liked to see something with more object motion – which tends to be harder on these upscalers – but for now, it is what it is.

With all of that said, at first glance the image quality with XeSS is quite good. In some respects it’s almost suspiciously good; as Ian quickly picked up on, the clarity of the “ventilation” text in Intel’s footage nearly rivals the native 4K render, making it massively clearer than the illegible mess in the original 1080p frame. This is solid evidence that as part of XeSS, Intel is also doing something outside the scope of image upscaling to improve texture clarity, possibly by enforcing a negative LOD bias on the game engine.
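
For those unfamiliar with the mechanism, a texture’s mip level is normally chosen from its screen-space footprint, and a negative LOD bias shifts that choice toward larger, sharper mip levels. The toy calculation below (our own numbers, not Intel’s) shows how a -1.0 bias would counteract the blurrier mip selection that comes from rendering at 1080p for a 4K target.

```python
# Simplified mip selection: base LOD is log2 of the texture footprint in texels
# per pixel, and the LOD bias shifts that value before clamping. Made-up numbers
# to illustrate the mechanism, not a claim about what XeSS actually does.
import math

def select_mip(texels_per_pixel, lod_bias=0.0, max_mip=10):
    """Return the clamped mip level for a given footprint and LOD bias."""
    lod = math.log2(max(texels_per_pixel, 1e-6)) + lod_bias
    return max(0.0, min(lod, max_mip))

# Rendering at 1080p for a 4K target roughly doubles the footprint in each axis
# (~2 texels per pixel), pushing the sampler one mip level blurrier; a -1.0 bias
# undoes that and restores the sharpest mip.
print(select_mip(2.0))        # 1.0 -> blurrier mip than at native 4K
print(select_mip(2.0, -1.0))  # 0.0 -> back to the sharpest mip
```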

In any case, like the rest of Intel’s forthcoming slate of GPU technologies, this won’t be the last we hear of XeSS. What Intel is demonstrating so far certainly looks promising, but it’s going to be their ability to deliver on those promises to both game developers and gamers that will matter in the end. And if Intel can indeed deliver, then they’re set to become a very welcome third player in the image upscaling technology race.

Performance Improvements For Intel’s Core Graphics Driver

Last but not least, while XeSS was the star of the show for Intel’s graphics software group, the company also delivered a brief update on the state of their core graphics driver that included a few interesting tidbits.

As a quick refresher, Intel these days is using a unified core graphics driver for their entire slate of modern GPUs. As a result, the work that has gone into the driver to prepare it for the launch of Xe-HPG can benefit existing Intel products (e.g. Xe-LP), and improvements made for current products get fed into the driver that will underpin future Xe-HPG products. While this is no different than how rival AMD operates, Intel’s expansion into discrete graphics has meant that the company has needed to re-focus on the state of their graphics driver. What was good enough for an integrated product in terms of performance and features will not cut it in the discrete graphics space, where customers spending hundreds of dollars on a video card will have higher expectations on both fronts.

Of recent note, Intel has completed a significant overhaul of both its GPU memory manager and its shader compiler. The net impact of these changes includes game loading times improving by up to 25%, and the throughput of CPU-bound games improving by up to 18%. In the case of the former, Intel got there by being smarter about how and where they compile shaders, including eliminating redundant compilations and doing a better job of scheduling compiler threads. As well, Intel has also refactored parts of their memory management code to better optimize the VRAM utilization of their discrete graphics products. Intel of course just launched their first discrete product earlier this year with DG1, so this is a good example of the kind of additional optimization work facing Intel as they branch out into discrete graphics.
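
To illustrate the general idea behind eliminating redundant compilations (and only the general idea, as Intel hasn’t detailed their driver internals), caching compiled shaders by a hash of their source means the expensive compile step runs once per unique shader rather than once per request.

```python
# Generic shader-compilation cache sketch: hash each unique shader and only
# invoke the (expensive) compiler on a cache miss, returning the cached binary
# on repeat requests. Not Intel's implementation, just the technique.
import hashlib

_compiled_cache = {}

def compile_shader(source: str) -> bytes:
    """Stand-in for the costly backend compile step."""
    return source.encode("utf-8")[::-1]   # placeholder for generated ISA

def get_shader(source: str) -> bytes:
    key = hashlib.sha256(source.encode("utf-8")).hexdigest()
    if key not in _compiled_cache:        # compile only on a cache miss
        _compiled_cache[key] = compile_shader(source)
    return _compiled_cache[key]

get_shader("float4 main() { return 1.0; }")
get_shader("float4 main() { return 1.0; }")   # second call hits the cache
print(len(_compiled_cache))                    # 1 compilation for 2 requests
```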

Finally, for features and functionality, the software group is also planning on releasing a suite of new driver features. Chief among these will be integrating all of their performance and overclocking controls directly into the company’s Graphics Command Center application. Intel will also be taking a page from NVIDIA and AMD’s current feature sets by adding new features for game streamers, including a fast stream capture path using Intel’s QuickSync encoder, automatic game highlights, and support for AI-assisted cameras. These features should be ready in time for the Intel Arc launch in Q1 of next year.

Comments

  • thestryker - Thursday, August 19, 2021 - link

    When I see what a big swing they're making on the software side I can't help but wonder if a big reason for them shipping 22q1 rather than 21q4 is software related. No matter what I hope they're going to have some sort of controls on pricing so that if/when they sell direct to consumers we don't see a used markups.
  • thestryker - Thursday, August 19, 2021 - link

    *huge markup

    Looking forward to the day edit exists!
  • Kamen Rider Blade - Thursday, August 19, 2021 - link

    This launch will prove if Raja Koduri is a real GPU genius, or a joke and AMD was better off once he left.
  • mode_13h - Thursday, August 19, 2021 - link

    He's so high-level that I wonder... If he was bad enough, no doubt he could throw several wrenches into the works. However, if the people under him are good enough, they can surely carry the project without his help.

    Having an incompetent boss just means you have to "manage up". It's taxing, but it can be done.
  • Kurosaki - Thursday, August 19, 2021 - link

    It's so sad to see every gpu manufacturer wasting precious die space for the image quality degenerative scaling. I bet in 10 years time, we will have to live with this crap whether we choose to or not. DLSS and the likes are not image improving, why go to such lengths to compete in the image tearing techniques?
  • mikeztm - Thursday, August 19, 2021 - link

    DLSS is an image-improving technique.
    It generates an image with a smaller error (as measured by PSNR) compared to the original high-resolution image, and so is mathematically improving the image.
  • Kurosaki - Friday, August 20, 2021 - link

    It will never compare to native resolutions. It's like interpolation is the new black. I'll never turn that on as long as it's not forced. Use the die space for more shaders instead. Or lower the costs by making smaller chips; hell, lower-class cards like the 3060 and 6700 cost like high-tier cards did a couple of years ago.
  • jordanclock - Friday, August 20, 2021 - link

    The 6700 does not contain anything comparable to the tensor cores found on Nvidia GPUs, so your comparison doesn't make sense. That is an example of exactly what you want: A GPU with more shaders and no dedicated ML hardware. But somehow the 6700 isn't magically cheaper. Weird, huh?

    Using die space for tensor cores like Nvidia has done has been nothing but an improvement for users. It means gamers can play games at higher resolutions than would be possible by throwing more shader cores at the GPU and it means that professionals with ML workloads get vastly improved local performance.
  • mode_13h - Saturday, August 21, 2021 - link

    I agree with you 99%, although the RDNA cards do burn a bit of die space on "rapid packed math" instructions. Not on par with tensor cores, either in terms of performance or die space.
  • Sushisamurai - Thursday, August 26, 2021 - link

    lol, that comment on the 6700 being exactly what he wants and it's not cheaper. I died. So true, yet so sad.
