Power Consumption

The nature of reporting processor power consumption has become, in part, a dystopian nightmare. Historically, the peak power consumption of a processor, as purchased, has been given by its Thermal Design Power (TDP, or PL1). For many markets, such as embedded processors, that TDP value still signifies the peak power consumption. For the processors we test at AnandTech, whether desktop, notebook, or enterprise, this is not always the case.

Modern high performance processors implement a feature called Turbo. This allows a processor, usually for a limited time, to go beyond its rated frequency. Exactly how far the processor goes depends on a few factors, such as the Turbo Power Limit (PL2), whether the peak frequency is hard coded, the thermals, and the power delivery. Turbo can sometimes be very aggressive, allowing power values 2.5x above the rated TDP.

AMD and Intel have different definitions for TDP, but broadly speaking they are applied in the same way. The difference comes down to turbo modes, turbo limits, turbo budgets, and how the processors manage that power balance. These topics are 10,000-12,000 word articles in their own right, and we've got a few articles worth reading on the topic.

In simple terms, processor manufacturers only ever guarantee two values, which are tied together: when all cores are running at base frequency, the processor should be running at or below the TDP rating. All turbo modes and power modes above that are not covered by warranty. Intel kind of screwed this up with the Tiger Lake launch in September 2020 by refusing to define a TDP rating for its new processors, instead going for a range. Obfuscation like this is frustrating for press and end-users alike.
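
For readers curious what power limits their own machine is actually enforcing, here is a minimal sketch, assuming a recent Intel system running Linux with the intel_rapl powercap driver loaded; it reads the long-term (PL1) and short-term (PL2) package limits from sysfs. The exact paths and the presence of a second constraint vary by platform, and AMD exposes this information differently, so treat it as illustrative rather than as our test methodology.

# Minimal sketch: read the configured package power limits (PL1/PL2) via the
# Linux powercap interface. Assumes the intel_rapl driver; may require root.
from pathlib import Path

RAPL_PKG = Path("/sys/class/powercap/intel-rapl:0")  # package-0 domain

def read_watts(name: str) -> float:
    # Values in sysfs are reported in microwatts; convert to watts.
    return int((RAPL_PKG / name).read_text()) / 1_000_000

if __name__ == "__main__":
    # constraint_0 is the long-term limit (PL1), constraint_1 the short-term (PL2).
    print(f"PL1 (long term):  {read_watts('constraint_0_power_limit_uw'):.0f} W")
    print(f"PL2 (short term): {read_watts('constraint_1_power_limit_uw'):.0f} W")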

However, for our tests in this review, we measure the power consumption of the processor in a variety of different scenarios. These include full AVX2 or AVX-512 workflows (as applicable to the processor under test), real-world image-model construction, and others as appropriate. These tests are intended as comparative data points between processors. We also note the peak power recorded in any of our tests.
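
For a sense of what such a measurement looks like in practice, here is a hedged sketch, assuming the same Linux powercap interface as above: it samples the package energy counter before and after a workload and converts the delta into an average power figure. This is illustrative only, not the tooling behind the numbers in this review.

# Hedged sketch: estimate average package power across a workload by sampling
# the RAPL energy counter (microjoules) before and after it runs. Assumes the
# intel_rapl powercap driver; reading energy_uj may require root.
import time
from pathlib import Path

PKG = Path("/sys/class/powercap/intel-rapl:0")

def energy_uj() -> int:
    return int((PKG / "energy_uj").read_text())

def average_power_watts(workload) -> float:
    # Run the workload and return the average package power over its runtime.
    max_uj = int((PKG / "max_energy_range_uj").read_text())
    e0, t0 = energy_uj(), time.monotonic()
    workload()
    e1, t1 = energy_uj(), time.monotonic()
    delta_uj = (e1 - e0) % max_uj  # handle counter wrap-around
    return delta_uj / 1_000_000 / (t1 - t0)

def busy_loop():
    # Dummy single-threaded workload: a few seconds of busy arithmetic.
    sum(x * x for x in range(30_000_000))

if __name__ == "__main__":
    print(f"Average package power: {average_power_watts(busy_loop):.1f} W")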

AMD Ryzen Threadripper Pro 3995WX

The specifications for this processor list 64 cores running at a TDP of 280 W. In our testing, we never saw any power consumption over 280 W:

[Chart: (0-0) Peak Power]

Going through our POV-Ray scaling power test for per-core consumption, we’re seeing a trend whereby 40% of the power goes to the non-core operation of the system, which is also likely to include the L3 cache.


[Chart: POV-Ray per-core power scaling. Red = Full Package, Blue = CPU Core only (minus L3, we think)]

We only hit the peak 280 W once we reach 56-core loading; up to that point total power climbs steadily, while per-core power falls from around 7 W/core at light loading to about 3 W/core when fully loaded. What this does to core frequencies is relatively interesting.

Our system starts around 4200 MHz, which is the rated turbo frequency, settling down to 4000-4050 MHz in the 8-core to 20-core loading range. After 20 cores, it is a slow decline at a rate of 25 MHz per extra core loaded, until at full CPU load we observe 3100 MHz on all cores. This is above the 2700 MHz base frequency, and comes out to 2.86 W per core of CPU-only power, or 4.37 W per core if we also include non-CPU power. Note that non-CPU power in this case might also include the L3.
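
As a quick back-of-the-envelope check on those figures (the core/non-core split below is our inference from the quoted numbers, not a separate measurement):

# Sanity check of the per-core figures quoted above. 280 W and 64 cores come
# from the spec sheet; the 2.86 W/core core-only figure is from our data.
package_w = 280.0
cores = 64

per_core_total = package_w / cores       # 4.375 W/core, matching the 4.37 W quoted
core_only_w = 2.86 * cores               # ~183 W spent in the cores themselves
non_core_w = package_w - core_only_w     # ~97 W left for uncore/IO (and likely L3)

print(f"Per core (package): {per_core_total:.2f} W")
print(f"Core-only total:    {core_only_w:.0f} W")
print(f"Non-core total:     {non_core_w:.0f} W ({non_core_w / package_w:.0%} of package)")

That puts non-core power at a bit over a third of the package at full load, in the same ballpark as the ~40% non-core trend seen across the scaling test.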

For an actual workload, our 3DPMavx test is a bit more aggressive than POV-Ray, cycling to full load for ten seconds for each of its six algorithms and then idling for a short time between them. In this test we saw idle frequencies of 2700 MHz, while all-core loading ran anywhere from 2900 MHz up to 3200 MHz. Power was again firmly limited to 280 W.

Comments

  • YB1064 - Tuesday, February 9, 2021

    You are kidding, right? Intel has become the poor man's AMD in terms of performance.
  • kgardas - Wednesday, February 10, 2021

    From a general computing point of view, yes, but from a specific point of view, no. Look at 3D particle movement! The 3175X, with less than half the cores and at least $1k cheaper, is able to provide more than 2x the perf of the best AMD. So if you have something hand-optimized for avx512, then old, outdated intel is still able to kick amd's ass, and with quite some style.
  • Spunjji - Wednesday, February 10, 2021

    @kgardas - Sure, but not many people can just throw their code at one of only a handful of programmers in the world with that level of knowledge and get optimised code back. That particle movement test isn't an industry-standard thing - it's Ian's personal project, hand-tuned by an ex-Intel engineer. Actual tests using AVX512 aren't quite so impressive because they only ever use it for a fraction of their code.
  • Fulljack - Thursday, February 11, 2021

    not to mention that any processor that runs avx512 will have its clockspeed tanked. unless your program maximizes the use of avx512, the net result will be a slower application than using avx/avx2 or none at all.
  • sirky004 - Tuesday, February 9, 2021

    what's your deal with AVX 512?
    The usual workloads with that in mind are better offloaded to the GPU.
    There's a reason why Linus Torvalds hates that "power virus"
  • kgardas - Wednesday, February 10, 2021

    Usually if you write the code, it's much easier to add a few avx512 intrinsic calls than to rewrite the software for GPU offload. But yes, the GPU will be faster *if* the perf is not killed by PCIe latency. E.g. if you need to interact with data on the CPU and perform just a few calcs on the GPU, then moving data cpu -> gpu -> cpu -> loop over will kill perf.
  • kgardas - Wednesday, February 10, 2021

    AFAIK, Linus hates that avx512 is not available everywhere in the x86 world. But this will be the same case with the upcoming AMX, so there is nothing intel can do about it. Not sure if AMD will need to pay some money for an avx512/amx license or not...
  • Qasar - Wednesday, February 10, 2021

    sorry kgardas but linus HATES avx512:
    https://www.extremetech.com/computing/312673-linus...
    https://www.phoronix.com/scan.php?page=news_item&a...
    "I hope AVX512 dies a painful death, and that Intel starts fixing real problems instead of trying to create magic instructions to then create benchmarks that they can look good on… "
    where did you get that he likes it? and chances are, unless intel makes amx available with no issues, amx may be the same niche as avx512 is.
  • kgardas - Wednesday, February 10, 2021

    Yes, I know he hates the stuff, but I'm not sure it's for the right reason. In fact I think AVX512 is the best AVX so far. I've read some of his rants and they were more about avx512 not being everywhere like avx2, etc. Also Linus was very vocal about his move from an Intel workstation to AMD, and since AMD does not provide avx512 yet, it may well be just pure engineering laziness -- don't bother me with this stuff, it does not run here. :-)
  • Qasar - Wednesday, February 10, 2021

    i don't think it has to do with laziness, it has to do with the overall performance hit you get when you use it, not to mention the power usage and the die space it needs. from what i have seen, it still seems to be a niche, and overall not worth it. it looks like amd could add avx512 to zen at some point, but maybe amd has decided it isn't worth it?
