Last week, we published our AMD 2nd Gen Ryzen Deep Dive, covering our testing and analysis of the latest generation of processors to come out from AMD. Highlights of the new products included better cache latencies, faster memory support, an increase in IPC, an overall performance gain over the first generation products, new power management methods for turbo frequencies, and very competitive pricing.

In our review, we made some changes to our testing. The differences were two-fold: the jump from Windows 10 Pro RS2 to Windows 10 Pro RS3, and the inclusion of the Spectre and Meltdown patches to mitigate the potential security issues. These patches are still being rolled out by motherboard manufacturers, with the latest platforms first in the queue. For our review, we tested the new processors with the latest OS updates and microcode updates, and also re-tested the Intel Coffee Lake processors. Due to time restrictions, the older Ryzen 1000-series results were used.

Due to the tight deadline of our testing and results, we pushed both our CPU and gaming tests live without as much formal analysis as we typically like to do. All the parts were competitive, but it quickly became clear that some of our results were not aligned with those from other media. Initially we were under the impression that this was a result of the Spectre and Meltdown (or 'Smeltdown') updates, as we were one of the few media outlets to go back and perform retesting under the new standard.

Nonetheless, we decided to perform an extensive internal audit of our testing to ensure that our results were accurate and completely reproducible, or, failing that, to understand why our results differed. No stone was left unturned: hardware, software, firmware, tweaks, and code. As a result of that process, we believe we have found the reason for our testing being so different from the results of others, and interestingly it opened a sizable can of worms we were not expecting.

An extract from our Power testing script
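The script itself is not reproduced here. As a purely hypothetical sketch, not AnandTech's actual code (the sensor function, its fixed return value, and the polling interval are all assumptions), a timestamped power-logging loop of the kind such a script might contain could look like:

```python
import time

def read_power_watts():
    # Hypothetical sensor read; a real script would query a vendor API or an
    # external power meter here. Returns a dummy constant for illustration.
    return 65.0

def log_power(duration_s=1.0, interval_s=0.1):
    """Collect (elapsed_seconds, watts) samples for duration_s seconds."""
    samples = []
    start = time.perf_counter()  # perf_counter is backed by the OS's chosen timer
    while time.perf_counter() - start < duration_s:
        samples.append((time.perf_counter() - start, read_power_watts()))
        time.sleep(interval_s)
    return samples

samples = log_power(duration_s=0.5, interval_s=0.05)
print(f"collected {len(samples)} samples")
```

Note that every sample in a loop like this depends on the system clock being trustworthy, which is exactly where the choice of underlying timer matters.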

What our testing identified is that the source of the issue is actually down to timers. Windows uses timers for many things, such as synchronization or ensuring linearity, and there are sets of software relating to monitoring and overclocking that require the timer with the most granularity - specifically, they often require the High Precision Event Timer (HPET). HPET is very important, especially when it comes to determining if 'one second' of PC time is the equivalent of 'one second' of real-world time - the way that Windows 8 and Windows 10 implement their timing strategy, compared to Windows 7, means that in rare circumstances the system time can be liable to clock shift over time. This is often highly dependent on how the motherboard manufacturer implements certain settings. HPET is a motherboard-level timer that, as the name implies, offers a very high level of timer precision beyond what other PC timers can provide, and can mitigate this issue. This timer has been shipping in PCs for over a decade, and under normal circumstances it should not be anything but a boon to Windows.
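As an illustration of how a high-level clock is backed by a lower-level timer, Python exposes the implementation and resolution behind each of its clocks. On Windows, `perf_counter` maps to `QueryPerformanceCounter`, which the OS may in turn back with the invariant TSC or, when forced, HPET; the exact output below will vary by platform.

```python
import time

# Inspect which low-level timer implementation backs each high-level clock
# on this particular system, and at what resolution.
for name in ("time", "monotonic", "perf_counter"):
    info = time.get_clock_info(name)
    print(f"{name:14s} implementation={info.implementation!r} "
          f"resolution={info.resolution:.2e}s monotonic={info.monotonic}")
```

The same program can therefore report very different resolutions on different machines, which is one reason timer-sensitive results differ between test beds.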

However, it sadly appears that reality diverges from theory – sometimes extensively so – and our CPU benchmarks for the Ryzen 2000-series review were caught in the middle. Instead of being a benefit to testing, our investigation found that when HPET is forced as the sole system timer, it can sometimes be a hindrance to system performance, particularly gaming performance. Worse, because HPET is implemented differently on different platforms, the actual impact of enabling it isn't even consistent across vendors, meaning that the effects of using HPET can vary from system to system as well as with the implementation.
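One way that hindrance shows up is in the raw cost of each timer read: the TSC can be read on-core in tens of nanoseconds, whereas an HPET read is an off-core access that can cost on the order of a microsecond, so timer-heavy code (a game engine querying the clock many times per frame, for instance) slows down measurably. A rough sketch of measuring per-read cost on whatever timer backs the current system:

```python
import time

def avg_clock_read_ns(n=200_000):
    # Time n back-to-back clock reads and return the average cost of a
    # single read, in nanoseconds. Includes loop overhead, so this is an
    # upper bound rather than a precise figure.
    start = time.perf_counter()
    for _ in range(n):
        time.perf_counter()
    elapsed = time.perf_counter() - start
    return elapsed / n * 1e9

cost_ns = avg_clock_read_ns()
print(f"average clock read cost: {cost_ns:.0f} ns")
```

On a system where the OS timer has been forced to HPET, this figure would be expected to come out dramatically higher than with the default TSC-backed timer.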

And that brings us to the state of HPET, our Ryzen 2000-series review, and CPU benchmarking in general. As we'll cover in the next few pages, HPET plays a very necessary and often very beneficial role in system timer accuracy; a role important enough that it's not desirable to completely disable HPET – and indeed in many systems this isn't even possible – all the while certain classes of software such as overclocking & monitoring software may even require it. However, for a few different reasons it can also be a drain on system performance, and as a result HPET shouldn't always be used. So let's dive into the subject of hardware timers, precision, Smeltdown, and how it all came together to make a perfect storm of volatility for our Ryzen 2000-series review.

A Timely Re-Discovery
Comments Locked

  • eva02langley - Thursday, April 26, 2018 - link

    They didn't get it wrong; they simply used the default settings for default systems. Even Intel told them to leave HPET on.
  • eddman - Thursday, April 26, 2018 - link

    No, Ian was forcing HPET to be used in the benches. He didn't use the default state. Intel did not tell them to FORCE HPET to be used either.
  • peevee - Thursday, April 26, 2018 - link

    They did get it wrong. 100% their fault, with the stupid excuse that their background is in overclocking.
  • rocky12345 - Wednesday, April 25, 2018 - link

    Good to see things cleared up on this. My question is this: I understand that on the AMD systems HPET was turned to forced on because Ryzen Master needs it, am I right on that? So that explains why it was turned on for the AMD systems, but if it was not the default for the Intel systems as well, how or what changed it to forced on the Intel systems? Was it changed in the Intel BIOS to enabled, which then forced the OS to use the forced-on option? My other concern is that if it eats away at so much performance, why haven't Intel and AMD come up with better ways to deal with this issue? Or is it kind of a newer problem because of the Spectre/Meltdown patches and microcode updates on the Intel platform, and HPET in forced mode kills performance because of that?
  • johnsmith222 - Wednesday, April 25, 2018 - link

    They've forced HPET on in benchmarks via script (as I understand from article) and for AMD it is irrelevant be it on or off (also explained in the article).
  • rocky12345 - Wednesday, April 25, 2018 - link

    So basically the moral of the story here is to leave things as the hardware vendor intended, or at default settings, and everything should be fine. This does raise about a million more questions on how reviewers should, or even need to, change the way they set up the gear for reviewing. It also confirms that this, and probably a lot of other variables in the hardware that can skew results one way or another, answers the question, or at least part of it, as to why the same hardware performs so differently from review to review. Just for the record, I am not saying Anandtech in any way tried to skew the numbers; I am very sure that is not the case here.
  • Maxiking - Wednesday, April 25, 2018 - link

    Well, if you have been enforcing HPET on for all those years, it pretty much means that all the tests on this site are not valid and not representative at all.

    HPET is widely known as a cause of several performance issues (stuttering, FPS drops on CPUs with more cores), but I never personally believed it because there were no benchmarks to support it, only some geeks on forums posting their latency screens with HPET on/off and anecdotal evidence from people who allegedly gained/lost FPS by turning it on/off.

    The point is: the benchmarks here are not run on the same sticks of RAM (frequencies, timings), but the highest officially supported frequency is used to simulate what the platform is capable of.

    So why turn on/enforce something by default if it could potentially cause performance regression and make your avg, min, max, and 99th percentile numbers absolutely skewed?
  • peevee - Thursday, April 26, 2018 - link

  • mapesdhs - Sunday, May 6, 2018 - link

    This what? Lost me there.

    Btw, some older benchmarks don't work with HPET off (I think 3DMark Vantage is one of them).
  • lefenzy - Wednesday, April 25, 2018 - link

    I'm pretty confused. In these cases, it's the denominator (time) that's changing that affects the resultant performance assessment right? The raw performance (numerator) is unchanged.

    e.g. FPS = frames / time. Frames remain the same, but time is measured differently.
