Gaming Performance: 1080p

All of our game testing results, including other resolutions, can be found in our benchmark database: www.anandtech.com/bench. All gaming tests were run with an RTX 2080 Ti.

For our gaming tests in this review, we re-benched the Ryzen 7 5800X processor to compare it directly against the newer Ryzen 7 5800X3D on Windows 11. All previous Ryzen 5000 processors were tested on Windows 10, while all of our Intel Alder Lake (12th Gen Core Series) testing was done on Windows 11.

We are using DDR4 memory at the following settings:

  • DDR4-3200

Civilization VI

(b-7) Civilization VI - 1080p Max - Average FPS

(b-8) Civilization VI - 1080p Max - 95th Percentile

Final Fantasy 14

(d-4) Final Fantasy 14 - 1080p Max - Average FPS

Final Fantasy 15

(e-3) Final Fantasy 15 - 1080p Standard - Average FPS

(e-4) Final Fantasy 15 - 1080p Standard - 95th Percentile

World of Tanks

(f-3) World of Tanks - 1080p Standard - Average FPS

(f-4) World of Tanks - 1080p Standard - 95th Percentile

Borderlands 3

(g-7) Borderlands 3 - 1080p Max - Average FPS

(g-8) Borderlands 3 - 1080p Max - 95th Percentile

Far Cry 5

(i-7) Far Cry 5 - 1080p Ultra - Average FPS

(i-8) Far Cry 5 - 1080p Ultra - 95th Percentile

Gears Tactics

(j-7) Gears Tactics - 1080p Ultra - Average FPS

(j-8) Gears Tactics - 1080p Ultra - 95th Percentile

Grand Theft Auto V

(k-7) Grand Theft Auto V - 1080p Max - Average FPS

(k-8) Grand Theft Auto V - 1080p Max - 95th Percentile

Red Dead Redemption 2

(l-7) Red Dead 2 - 1080p Max - Average FPS

(l-8) Red Dead 2 - 1080p Max - 95th Percentile

Strange Brigade (DirectX 12)

(m-7) Strange Brigade DX12 - 1080p Ultra - Average FPS

(m-8) Strange Brigade DX12 - 1080p Ultra - 95th Percentile

Strange Brigade (Vulkan)

(n-7) Strange Brigade Vulkan - 1080p Ultra - Average FPS

(n-8) Strange Brigade Vulkan - 1080p Ultra - 95th Percentile

Focusing on our test suite at 1080p, the AMD Ryzen 7 5800X3D again performs well against both the other Ryzen 5000 processors and Intel's Alder Lake chips. In some situations Intel's 12th Gen Core, with its higher IPC and faster core frequencies, comes out ahead, but only in titles where the extra L3 cache has little effect on performance.

In titles that favor V-Cache, the performance differences are conclusive: where the extra L3 cache can be utilized, the 5800X3D and its 96 MB of 3D V-Cache sit comfortably above the competition.

Comments

  • Gavin Bonshor - Thursday, June 30, 2022 - link

    We test at JEDEC to compare apples to apples with previous reviews. The recommendation is based on my personal experience and what AMD recommends.
  • HarryVoyager - Thursday, June 30, 2022 - link

    Having done an upgrade from a 5800X to a 5800X3D, one of the interesting things about the 5800X3D is that it's largely RAM insensitive. You can get pretty much the same performance out of DDR4-2366 as you can out of DDR4-3600+.

    And it's not that it is under-performing. The things that it beats the 5800X at, it still beats it at, even when the 5800X is running very fast, low-latency RAM.

    The upshot is, if you're on an AM4 platform with stock RAM, you actually get a lot of improvement from the 5800X3D in its favored applications.
  • Lucky Stripes 99 - Saturday, July 2, 2022 - link

    This is why I hope to see this extra cache come to the APU series. My 4650G is very RAM speed sensitive on the GPU side. Problem is, if you start spending a bunch of cash on faster system memory to boost GPU speeds, it doesn't take long before a discrete video card becomes the better choice.
  • Oxford Guy - Saturday, July 2, 2022 - link

    The better way to test is to use both the optimal RAM and the slow JEDEC RAM.
  • sonofgodfrey - Thursday, June 30, 2022 - link

    Wow, it's been a while since I looked at these gaming benchmarks. These FPS times are way past the point of "minimum" necessary. I submit two conclusions:
    1) At some point you just have to say the game is playable and just check that box.
    2) The benchmarks need to reflect this result.

    If I were doing these tests, I would probably just set a low limit for FPS and note how much (percentage-wise) of the benchmark run was below that level. If it is 0%, then that CPU/GPU/whatever combination just gets a "pass"; if not, it gets a "fail" (and you could dig into the numbers to see how much it failed). See the sketch after this comment for a rough illustration.

    Based on these criteria, if I had to buy one of these processors for gaming, I would go with the least costly processor here, the i5-12600K. It does the job just fine, and I can spend the extra $210 on a better GPU/memory/SSD.
    (Note: I'm not buying one of these processors, I don't like Alder Lake for other reasons, and this is not an endorsement of Alder Lake)
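A minimal sketch of the pass/fail idea above, assuming per-frame times in milliseconds and an arbitrary 60 FPS floor (both are illustrative assumptions, not part of any actual test suite here):

```python
# Hypothetical sketch: a run "passes" only if no frame dropped below the FPS floor.
# frame_times_ms and the 60 FPS floor are assumed inputs, not real benchmark data.

def below_floor_share(frame_times_ms, fps_floor=60.0):
    """Fraction of frames rendered slower than the chosen FPS floor."""
    budget_ms = 1000.0 / fps_floor  # e.g. 16.67 ms per frame at 60 FPS
    slow = sum(1 for t in frame_times_ms if t > budget_ms)
    return slow / len(frame_times_ms)

def pass_or_fail(frame_times_ms, fps_floor=60.0):
    share = below_floor_share(frame_times_ms, fps_floor)
    return ("pass" if share == 0.0 else "fail", share)

# Example: one 25 ms stutter in four frames -> ('fail', 0.25)
print(pass_or_fail([10.0, 12.0, 25.0, 11.0]))
```

One wrinkle with a strict 0% cut-off is that a single dropped frame fails the whole run, which is part of why reporting the share itself (or a percentile figure) remains useful.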
  • lmcd - Thursday, June 30, 2022 - link

    Part of the intrigue is that it can hit the minimums and 1% lows for smooth play with 120Hz/144Hz screens.
  • hfm - Friday, July 1, 2022 - link

    I agree. I'm using a 5600X + 3080 + 32GB dual-channel, dual-rank, and my 3080 is still the bottleneck most of the time at the resolution I play all my games in, 1440p@144Hz.
  • mode_13h - Saturday, July 2, 2022 - link

    > These FPS times are way past the point of "minimum" necessary.

    You're missing the point. They test at low resolutions because those tend to be CPU-bound. This exaggerates the difference between different CPUs.

    And such testing is relevant because future games will probably lean more heavily on the CPU than current games do. So, even at higher resolutions, we should expect future game performance to be affected by one's choice of a CPU today to a greater extent than current games are.

    So, in essence, what you're seeing is somewhat artificial, but that doesn't make it irrelevant.

    > I would probably just set a low limit for FPS and
    > note how much (% wise) of the benchmark run was below that level.

    Good luck getting consensus on what represents a desirable framerate. I think the best bet is to measure mean + 99th percentile and then let people decide for themselves what's good enough.
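For comparison, a minimal sketch of the mean-plus-percentile reporting suggested above; NumPy, millisecond frame times, and the convention of converting the 99th-percentile frame time back into an FPS figure are all assumptions here (the charts in this review report a 95th-percentile figure, but the idea is the same):

```python
# Hypothetical sketch: report mean FPS plus a 99th-percentile ("1% low"-style) figure.
import numpy as np

def summarize(frame_times_ms):
    t = np.asarray(frame_times_ms, dtype=float)
    mean_fps = 1000.0 * len(t) / t.sum()   # average frame rate over the whole run
    p99_time = np.percentile(t, 99)        # 99% of frames completed faster than this
    return mean_fps, 1000.0 / p99_time     # convert the slow-frame time back into FPS

mean_fps, p99_fps = summarize([10.0, 12.0, 25.0, 11.0])
print(f"mean: {mean_fps:.1f} FPS, 99th percentile: {p99_fps:.1f} FPS")
```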
  • sonofgodfrey - Tuesday, July 5, 2022 - link

    >Good luck getting consensus on what represents a desirable framerate.

    You would need to do some research (blind A-B testing) to see what people can actually detect.
    There are probably dozens of human factors PhD theses about this from the last 20 years.
    I suspect that anything above 60 Hz is going to be the limit for most people (after all, a majority of movies are still shot at 24 FPS).

    >You're missing the point. They test at low resolutions because those tend to be CPU-bound. This exaggerates the difference between different CPUs.

    I can see your logic, but what I see is this:
    1) Low resolution test is CPU bound: At several hundred FPS on some of these tests they are not CPU bound, and the few percent difference is no real difference.
    2) Predictor of future performance: Probably not. Future games if they are going to push the CPU will use a) even more GPU offloading (e.g. ray-tracing, physics modeling), b) use more CPUs in parallel, c) use instruction set additions that don't exist or are not available yet (AVX-512, AI acceleration). IOW, your benchmark isn't measuring the right "thing", and you can't know what the right thing is until it happens.
  • mode_13h - Thursday, July 7, 2022 - link

    > You would need to do some research (blind A-B testing) to see what people can actually detect.

    Obviously not going to happen, on a site like this. Furthermore, readers have their own opinions of what framerates they want and trying to convince them otherwise is probably a thankless errand.

    > I suspect that anything above 60 Hz is going to be the limit for most people
    > (after all, a majority of movies are still shot at 24 FPS).

    I can tell you from personal experience this isn't true. But, it's also not an absolute. You can't divorce the refresh rate from other properties of the display, like whether the pixel illumination is fixed or strobed.

    BTW, 24 fps movies look horrible to me. 24 fps is something they settled on way back when film was heavy, bulky, and expensive. And digital cinema cameras are quite likely used at higher framerates, if only to avoid judder when re-targeting to 30 or 60 Hz.

    > At several hundred FPS on some of these tests they are not CPU bound,

    When different CPUs produce different framerates with the same GPU, then you know the CPU is a limiting factor to some degree.

    > and the few percent difference is no real difference.

    The point of benchmarking is to quantify performance. If the difference is only a few percent, then so be it. We need data in order to tell us that. Without actually testing, we wouldn't know.

    > Predictor of future performance: Probably not.

    That's a pretty bold prediction. I say: do the testing, report the data, and let people decide for themselves whether they think they'll need more CPU headroom for future games.

    > Future games if they are going to push the CPU will use
    > a) even more GPU offloading (e.g. ray-tracing, physics modeling),

    With the exception of ray-tracing, which can *only* be done on the GPU, why do you think games aren't already offloading as much as possible to the GPU?

    > b) use more CPUs in parallel

    That starts to get a bit tricky as you have increasing numbers of cores. The more you try to divide up the work involved in rendering a frame, the more overhead you incur. Contrast that with a CPU with faster single-thread performance, where you know all of that additional performance will end up reducing the CPU portion of frame preparation. So, as nice as parallelism is, there are practical challenges when trying to scale up realtime tasks to use ever-increasing numbers of cores (see the Amdahl's-law sketch at the end of this comment).

    > c) use instruction set additions that don't exist or are not available yet (AVX-512, AI acceleration).

    Okay, but if you're buying a CPU today that you want to use for several years, you need to decide which is best from the available choices. Even if future CPUs have those features and future games can use them, that doesn't help me while I'm still using the CPU I bought today. And games will continue to work on "legacy" CPUs for a long time.

    > IOW, your benchmark isn't measuring the right "thing",
    > and you can't know what the right thing is until it happens.

    Let's be clear: it's not *my* benchmark. I'm just a reader.

    Also, video games aren't new, and the gaming scene changes somewhat incrementally, especially given how many years it now takes to develop them. So, tests done today should tell us about gaming performance over the next few years in much the same way that tests from a few years ago tell us about gaming performance today.

    I'll grant you that it would be nice to have data to support this: if someone would re-benchmark modern games with older CPUs and compare the results to ones taken when those CPUs first launched.
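To put a rough number on the parallelism point above, here is an Amdahl's-law sketch; the 90% parallel fraction is a made-up figure for illustration, not a measurement of any real game engine:

```python
# Hypothetical Amdahl's-law sketch: the ceiling on speedup when only part of
# the per-frame CPU work can be parallelized. The 0.9 fraction is assumed.

def amdahl_speedup(parallel_fraction, cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Even if 90% of frame preparation scaled perfectly, the ceiling is 10x:
for cores in (2, 4, 8, 16):
    print(f"{cores:2d} cores -> {amdahl_speedup(0.9, cores):.2f}x")
```

Faster single-thread performance, by contrast, shrinks the serial and parallel portions alike, which is the crux of the point about frame preparation above.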
