Forcing HPET On, Plus Spectre and Meltdown Patches

Based on my extreme overclocking roots back in the day, my automated benchmark scripts for the past year or so have forced HPET through the OS. Given that AMD’s guidance is now that it doesn’t matter for performance, and Intel hasn’t even mentioned the issue in relation to a CPU review, having HPET enabled was the immediate way to ensure that every benchmark result was consistent, and would not be interfered with by clock drift or by special motherboard manufacturer in-OS tweaks. This was a fundamental part of my overclocking roots – if I want to test a CPU, I want to make absolutely sure that the motherboard is not causing any issues. It really gets up my nose when, after a series of CPU testing, it turns out that the motherboard had an issue – keeping HPET on was designed to stop any timing issues should they arise.

From our results over that time, if HPET was having any effect, it went unnoticed: our results were broadly similar to others, and each of the products fell in line with where it was expected to be. Over the several review cycles we had, a couple of issues cropped up that we couldn’t explain, such as our Skylake-X gaming numbers that were low, or the first batch of Ryzen gaming tests, where the data was thrown out for being obviously wrong; however, we never managed to narrow down the issue.

Enter our Ryzen 2000 series numbers in the review last week, and what had changed was the order of results. The way that forcing HPET was affecting results was seemingly adjusted when we bundled in the Spectre and Meltdown patches, which also come with their own performance penalty on some systems. Pulling one set of results down further than expected set off some alarm bells and warranted closer examination.

HPET, by the way it is invoked, is programmed by a memory-mapped IO window, exposed through ACPI, into the circuit found on the chipset. Accessing it is very much an IO command, and one of the types of commands that fall under the realm of those affected by the Spectre and Meltdown patches. This would imply that any software that requires HPET access (or all timing software, if HPET is forced) would have its performance reduced even further when these patches are applied, further compounding the issue.
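As a rough illustration of why this matters (a minimal sketch of our own, not part of the benchmark suite), the snippet below measures the average cost of a single QueryPerformanceCounter() call on Windows. With the default timer source the call is cheap; with HPET forced as the platform clock, each call ends up touching the chipset’s timer, and the Spectre/Meltdown mitigations add further overhead on top:

    // Hypothetical micro-benchmark: average cost of one QueryPerformanceCounter() call.
    // With HPET forced, each call is serviced by the chipset timer rather than a cheap
    // in-core read, so this figure grows - and grows further with the mitigations applied.
    #include <windows.h>
    #include <cstdio>

    int main() {
        LARGE_INTEGER freq, start, end, dummy;
        QueryPerformanceFrequency(&freq);        // ticks per second of the current QPC source
        const int iterations = 1000000;

        QueryPerformanceCounter(&start);
        for (int i = 0; i < iterations; ++i) {
            QueryPerformanceCounter(&dummy);     // the call whose cost we want to measure
        }
        QueryPerformanceCounter(&end);

        double elapsed_ns = (end.QuadPart - start.QuadPart) * 1e9 / freq.QuadPart;
        printf("QPC frequency   : %lld Hz\n", (long long)freq.QuadPart);
        printf("Average per call: %.1f ns\n", elapsed_ns / iterations);
        return 0;
    }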

It Affects AMD and Intel Differently: Productivity

So far we have done some quick initial re-testing on the two key processors in this debate, the Ryzen 7 2700X and Intel Core i7-8700K. These are the two most talked about processors at this time, due to the fact that they are closely matched in performance and price, with each one having benefits in certain areas over the other. For our new tests, we have enabled the Spectre/Meltdown patches on both systems – HPET is ‘on’ in the BIOS, but left as ‘default’ in the operating system.

For our productivity tests, on the Intel system, there was an overall +3.3% gain when un-forcing HPET in the OS:

The biggest gains here were in the web tests, a couple of the renderers, WinRAR (memory bound), and PCMark 10. Everything else was pretty much identical. Our compile tests gave us three very odd consecutive numbers, so we are looking at those results separately.

On the AMD system, the productivity test difference was an overall +0.3% gain when un-forcing HPET in the OS:

This is a lower gain, with the biggest rise coming from PCMark10’s video conference test to the tune of +16%. The compile test results were identical, and a lot of tests were within 1-2%.

It Affects AMD and Intel Differently: Gaming

The bigger changes happen with the gaming results, which is the reason why we embarked on this audit to decipher our initial results. Games rely on timers to ensure that data, pacing, and tick rates are all sufficient for frames to be delivered in the correct manner – the balance here is between waiting on timers to make sure everything is correct, or merely processing the data and hoping it comes out in more or less the right order: having too fine a level of control can introduce performance delays. In fact, this is what we observe.
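To put that into context, below is a simplified sketch (hypothetical, not taken from any particular engine) of a frame loop that consults the timer several times per frame for pacing and per-subsystem metrics. Any extra cost per timer read is multiplied by the frame rate, which is one reason high frame rates at lower resolutions are hit hardest:

    // Hypothetical frame loop: the timer is read several times per frame, so any extra
    // per-call cost (e.g. from a forced HPET source) scales with the frame rate.
    #include <windows.h>

    static double now_seconds(const LARGE_INTEGER& freq) {
        LARGE_INTEGER t;
        QueryPerformanceCounter(&t);
        return static_cast<double>(t.QuadPart) / freq.QuadPart;
    }

    void run_frame_loop(volatile bool& running) {
        LARGE_INTEGER freq;
        QueryPerformanceFrequency(&freq);
        double previous = now_seconds(freq);

        while (running) {
            double frame_start = now_seconds(freq);    // read 1: frame delta for pacing
            double dt = frame_start - previous;
            previous = frame_start;

            // update_simulation(dt);                  // fixed-tick engines may read the timer again per tick

            double before_render = now_seconds(freq);  // read 2: per-subsystem metric
            // render_frame();
            double render_ms = (now_seconds(freq) - before_render) * 1000.0;  // read 3

            (void)dt; (void)render_ms;                 // fed into pacing/telemetry in a real engine
            // At 150 fps, even three reads per frame is ~450 timer accesses per second,
            // and heavily instrumented engines can issue many more.
        }
    }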

With our GTX 1080 and AMD’s Ryzen 7 2700X, we saw minor gains across the board; however, it was clear that 1080p was the main beneficiary over 4K. The 10%+ adjustments came only in Civilization 6 and Rise of the Tomb Raider.

Including the 99th percentile data, removing HPET gave an overall boost of around 4%; however, most of the gains were limited to specific titles at the lower resolutions, which matters for any user chasing fast frame rates at those resolutions.

The Intel side of the equation is where it gets particularly messy. We rechecked these results several times, but the data was quite clear.

As with the AMD results, the biggest beneficiaries of disabling HPET were the 1080p tests. Civilization 6 and Rise of the Tomb Raider had substantial performance boosts (also in 4K testing), with Grand Theft Auto seeing an additional +27%. By comparison, Shadow of Mordor was ‘only’ +6%.

Given that the difference between the two sets of data is related to the timer, one could postulate that the more granular the timer, the more of an effect it can have: on both of our systems, the QPC timer is set for 3.61 MHz as a baseline, but the HPET frequencies are quite different. The AMD system has a HPET timer at 14.32 MHz (~4x), while the Intel system has a HPET timer at 24.00 MHz (~6.6x). It is clear that the higher granularity of the Intel timer is causing substantially more pipeline delays – moving from a tick-to-tick delay of 277 nanoseconds to 70 nanoseconds to 41.7 nanoseconds crosses the boundary from being slower than a CPU-to-DRAM access to almost encroaching on a CPU-to-L3 cache access, which could be one of the reasons for the results we are seeing, along with the nature of how the HPET timer works.
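For reference, the tick-to-tick delays quoted above follow directly from those frequencies (period = 1 / frequency). A quick check using the figures from our systems:

    // Sanity check of the tick periods quoted above: period = 1 / frequency.
    #include <cstdio>

    int main() {
        const double frequencies_mhz[] = {3.61, 14.32, 24.00};  // QPC baseline, AMD HPET, Intel HPET
        const char*  labels[]          = {"QPC baseline", "AMD HPET", "Intel HPET"};
        for (int i = 0; i < 3; ++i) {
            double period_ns = 1000.0 / frequencies_mhz[i];      // 1/MHz gives microseconds; x1000 for ns
            printf("%-12s %6.2f MHz -> %6.1f ns per tick\n", labels[i], frequencies_mhz[i], period_ns);
        }
        // Prints ~277.0 ns, ~69.8 ns and ~41.7 ns, matching the values in the text.
        return 0;
    }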

There is also another aspect to gaming that does not appear with standard CPU tests: depending on how the engine is programmed, some game developers like to keep track of a lot of the functions in flight, either to adjust features on the fly or for internal metrics. For anyone who has worked extensively with a debug mode and had to churn through its output, it is basically this. If a title has shipped with a number of those internal metrics still running in the background, this is exactly the sort of issue that having HPET enabled could stumble upon – if there is a timing mismatch (based on the way HPET works) and delays are introduced due to these mismatches, it could easily slow down the system and reduce the frame rate.
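To illustrate the point (a hypothetical sketch, not code from any shipped title), internal metrics of this kind are often implemented as scope timers wrapped around hot functions. If they are left enabled in a release build, every instrumented call adds two more timer reads, and each of those reads becomes far more expensive with HPET forced:

    // Hypothetical scope timer of the kind used for internal engine metrics.
    // Every instrumented function adds two timer reads; left running in a
    // shipping build, hot paths can generate thousands of reads per frame.
    #include <windows.h>
    #include <cstdio>

    struct ScopeTimer {
        const char* name;
        LARGE_INTEGER start;
        explicit ScopeTimer(const char* n) : name(n) { QueryPerformanceCounter(&start); }  // read on entry
        ~ScopeTimer() {
            LARGE_INTEGER end, freq;
            QueryPerformanceCounter(&end);                                                 // read on exit
            QueryPerformanceFrequency(&freq);
            double us = (end.QuadPart - start.QuadPart) * 1e6 / freq.QuadPart;
            printf("%s: %.2f us\n", name, us);  // a real engine would log to an internal buffer instead
        }
    };

    void update_ai()      { ScopeTimer t("update_ai");      /* work */ }
    void update_physics() { ScopeTimer t("update_physics"); /* work */ }

    int main() {
        update_ai();
        update_physics();
        return 0;
    }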

Comments

  • Cooe - Wednesday, April 25, 2018 - link

    Chris Hook was a marketing guy through and through, and was behind some of AMD's worst marketing campaigns in the history of the company. Him leaving is a total non-issue in my eyes, and potentially even a plus, assuming they can replace him with someone who can actually run good marketing. That's always been one of AMD's most glaring weak spots.
  • HilbertSpace - Wednesday, April 25, 2018 - link

    Thanks for the great follow up article. Very informative.
  • Aichon - Wednesday, April 25, 2018 - link

    I applaud your decision to reflect default settings going forward, since the purpose of these reviews is to give your readers a sense of how these chips compare to each other in various forms of real-world usage.

    As to the closing question of how these settings should be reflected to readers, I think the ideal case (read: way more work than I'm actually expecting you to do) would be that you extend the Benchmarking Setup page in future reviews to include mention of any non-default settings you use, with details about which setting you chose, why you set it that way, and, optionally, why someone might want to set it differently, as well as how it might impact them. Of course, that's a LOAD of work, and, frankly, a lot of how it might impact other users in unknown workflows would be speculation, so what you end up doing should likely be less than that. But doing it that way would give us that information if we want it, would tell us how our usage might differ from yours, and, for any of us who don't want that information, would make it easy to skip past.
  • phoenix_rizzen - Wednesday, April 25, 2018 - link

    Would be interesting to see a series of comparisons for the Intel CPU:

    No Meltdown, No Spectre, HPET default
    No Meltdown, No Spectre, HPET forced
    Meltdown, No Spectre, HPET default
    Meltdown, No Spectre, HPET forced

    To compare to the existing Meltdown, Spectre, HPET default/forced results.

    Will be interesting to see just what kind of performance impact Meltdown/Spectre fixes really have.

    Obviously, going forward, all benchmarks should be done with full Meltdown/Spectre fixes in place. But it would still be interesting to see the full range of their effects on Intel CPUs.
  • lefty2 - Wednesday, April 25, 2018 - link

    Yes, I'd like to second this suggestion ;) . No one has done any proper analysis of the Meltdown/Spectre performance impact on Windows since Intel and AMD released the final microcode mitigations (i.e. post April 1st).
  • FreckledTrout - Wednesday, April 25, 2018 - link

    I agree as the timing makes this very curious. One would think this would have popped up before this review. I get this gut feeling the HPET being forced is causing a much greater penalty with the Meltdown and Spectre patches applied.
  • Psycho_McCrazy - Wednesday, April 25, 2018 - link

    Thanks to Ryan and Ian for such a deep dive into the matter and for finding out what the issue was...
    Even though this changes the gaming results a bit, it still does not change the fact that the 2700X is a very, very competent 4K gaming CPU.
  • Zucker2k - Wednesday, April 25, 2018 - link

    You mean gpu-bottle-necked gaming? Sure!
  • Cooe - Wednesday, April 25, 2018 - link

    But to be honest, the 8700K's advantage when totally CPU limited isn't all that fantastic either. Sure, there are still a handful of titles that put up notable 10-15% advantages, but most are now well in the realm of 0-10%, with many titles in a near dead heat – which, compared to the Ryzen 7 vs Kaby Lake launch situation, is absolutely nuts. Hell, even comparing the 1st Gen chips today vs then, the gaps have all shrunk dramatically with no changes in hardware, and this slow & steady trend shows no signs of petering out (Zen in particular is an arch design extraordinarily ripe for software-level optimizations). Whereas there were a good number of build/use scenarios where Intel was the obviously superior option vs 1st Gen Ryzen, with how much the gap has narrowed those have now shrunk into a tiny handful of rather bizarre niches.

    These being, first & foremost, gamers who use a 1080p 144/240Hz monitor with at least a GTX 1080/Vega 64. For most everyone with more realistic setups, like 1080p 60/75Hz with a mid-range card, or a high-end card paired with 1440p 60/144Hz (or 4K 60Hz), the Intel chip is going to have no gaming performance advantage whatsoever, while being slower to a crap ton slower than Ryzen 2 in any sort of multi-tasking scenario or decently threaded workload. And unlike Ryzen's notable width advantage, Intel's general single-thread perf is most often near impossible to notice without both systems side by side and a stopwatch in hand, while running a notoriously single-thread-heavy load like some serious Photoshop (both are already so fast on a per-core basis that you pretty much have to deliberately seek out situations where there'll be a noticeable difference, whereas AMD's extra cores/threads & superior SMT become readily apparent as soon as you start opening & running more and more things concurrently – all modern OSes are capable of scaling to as many cores/threads as you can give them).

    Just my 2 cents at least. While the i7-8700K was quite compelling for a good number of use-cases vs Ryzen 1, it just.... well isn't vs Ryzen 2.
  • Tropicocity - Monday, April 30, 2018 - link

    The thing is, any gamer (read: gamer!) looking to get a 2700X or an 8700K is very likely to be pairing it with at least a GTX 1070, and more than likely either a 1080/144, a 1440/60, or a 1440/144 monitor. You don't generally spend $330-$350 / £300+ on a CPU as a gamer unless you have sufficient pixel-pushing hardware to match it.
    Those who are still on 1080/60 would be much more inclined to get more 'budget' options, such as a Ryzen 1400-1600, or an 8350k-8400.

    There is STILL an advantage at 1440p, which these results do not show. At 4k, yes, the bottleneck becomes almost entirely the GPU, as we're not currently at the stage where that resolution is realistically doable for the majority.

    Also, as a gamer, you shouldn't neglect the single-threaded scenario. There are a few games that benefit from extra cores and threads, sure, but if you pick the most played games in the world, you'll come to see that the only thing they appreciate is clock speed and single- (occasionally dual-) threaded workloads. League of Legends, World of Warcraft, Fortnite, CS:GO, etc.

    The games that are played by more people globally than any other will see a much better time being played on a Coffee Lake CPU compared to a Ryzen.

    You do lose the extra productivity, and you won't be able to stream at 10mbit (Twitch is capped to 6, so it's fine), but you will certainly see improvements when you're playing the game for yourself.

    Don't get me wrong here; I agree that Ryzen 2 vs Coffee Lake is a lot more balanced and much closer in comparison than anything in the past decade in terms of Intel vs AMD, but to say that gamers will see "no performance advantage whatsoever" going with an Intel chip is a little too farfetched.
