Power Consumption: Hot Hot HOT

I won’t rehash the full ongoing issue with how companies report power vs TDP in this review – we’ve covered it a number of times before. But in a quick sentence: Intel uses one published value for sustained performance, and an unpublished ‘recommended’ value for turbo performance, the latter of which is routinely ignored by motherboard manufacturers. Most high-end consumer motherboards ignore the sustained value, often 125 W, and allow the CPU to consume as much as it needs, with the real limits being the full power draw at turbo, the thermals, or the power delivery.
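
For readers who want to see how those two values interact, the limits are usually described as PL1 (the sustained value), PL2 (the turbo value), and Tau (the turbo window). The following is a minimal sketch of that model in Python; the specific figures and the hard cutoff are illustrative assumptions rather than measurements from this review, and real firmware tracks a moving average of power over the Tau window rather than a fixed timer.

# Minimal sketch of PL1/PL2/Tau-style turbo power limits.
# All numbers here are illustrative assumptions, not values measured in this review.

def allowed_package_power(t_seconds,
                          pl1=125.0,      # published sustained limit (W)
                          pl2=250.0,      # 'recommended' turbo limit (W)
                          tau=56.0,       # turbo window (s)
                          enforce=True):
    """Power a board would permit t_seconds into a heavy load (simplified)."""
    if not enforce:
        return pl2                        # 'infinite turbo': the board never drops to PL1
    return pl2 if t_seconds <= tau else pl1

for t in (5, 30, 60, 600):
    print(f"t={t:>4}s  with limits: {allowed_package_power(t):.0f} W"
          f"   infinite turbo: {allowed_package_power(t, enforce=False):.0f} W")

A board that follows the limits drops back to the sustained value once the turbo window expires; a board running the ‘infinite turbo’ strategy simply never drops back.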

One of the dimensions of this we don’t often talk about is that the power consumption of a processor always depends on the actual instructions running through the core. A core can be ‘100%’ active while sitting around waiting for data from memory or doing simple addition; however, a core has multiple ways to run instructions in parallel, and the most complex instructions consume the most power. This was noticeable in the desktop consumer space when Intel introduced vector extensions, AVX, to its processor design. The subsequent introduction of AVX2, and later AVX-512, means that running these instructions draws the most power.

AVX-512 comes with its own discussion, because even entering an ‘AVX-512’ mode causes additional issues. Intel’s introduction of AVX-512 on its server processors showed that in order to remain stable, the core had to reduce its frequency and increase its voltage, while also pausing to enter the special AVX-512 power mode. This made AVX-512 worthwhile only for sustained high-performance server code. But now Intel has enabled AVX-512 across its product line, from notebook to enterprise, allowing these chips to run AI code faster and enabling new use cases. We’re also a couple of generations on from then, and AVX-512 doesn’t take quite the same frequency hit as it did, but it still requires a lot of power.

For our power benchmarks, we’ve taken several tests that represent a real-world compute workload, a strong AVX2 workload, and a strong AVX-512 workload. Note that Intel lists the Core i7-11700K as a 125 W processor.
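
We don’t publish our power logging scripts, but as a rough illustration of how package power can be sampled on a Linux system, the sketch below reads the package energy counter exposed through the RAPL powercap interface. The sysfs path and the one-second interval are assumptions that can vary by platform; this is not the exact tooling used for this review.

import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"   # package-0 energy counter, microjoules

def read_energy_uj():
    with open(RAPL) as f:
        return int(f.read())

prev = read_energy_uj()
for _ in range(60):                                   # log one minute of package power
    time.sleep(1.0)
    now = read_energy_uj()
    print(f"package power: {(now - prev) / 1e6:.1f} W")   # microjoules over 1 s -> watts
    prev = now                                        # counter wrap-around ignored in this sketch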

Motherboard 1: Microcode 0x2C

Our first test, using Agisoft Photoscan 1.3, shows a peak power consumption of around 180 W, although depending on the part of the test, we have sustained periods at 155 W and 130 W. Peak temperatures flutter around 70ºC, but the CPU spends most of the time at around the 60ºC mark.

For the AVX2 workload, we run POV-Ray. This is the workload on which we saw the previous generation 10-core processors exceed 260 W.

At idle, the CPU is consuming under 20 W while sitting around 30ºC. When the workload kicks in after 200 seconds or so, the power consumption rises very quickly to the 200-225 W band. This motherboard implements the ‘infinite turbo’ strategy, and so we get a sustained 200-225 W for over 10 minutes. Through this time, our CPU peaks at 81ºC, which is fairly reasonable for some of the best air cooling on the market. During this test, the CPU held a sustained 4.6 GHz on all cores.

Our AVX-512 workload is 3DPM. This is a custom in-house test, accelerated for AVX2 and AVX-512 by an ex-Intel HPC guru several years ago (for disclosure, AMD has a copy of the code, but hasn’t suggested any changes).

The test runs for 10-15 seconds and then idles for 10 seconds, so even a system that doesn’t run an infinite turbo gets through it quickly. What we see here in the power-only graph is the alarming peaks of 290-292 W. Looking at our data, the all-core turbo under AVX-512 is 4.6 GHz, sometimes dipping to 4.5 GHz. Ouch. But that’s not all.

Our temperature graph looks quite drastic. Within a second of running AVX-512 code, we are in the high 90s ºC, or in some cases at 100ºC. Our temperatures peak at 104ºC, and here’s where we get into a discussion about thermal hotspots.

There are a number of ways to report CPU temperature. We can either take the instantaneous value of a single spot of the silicon while it is going through a high current-density event, like compute, or we can consider the CPU as a whole, with all of its thermal sensors. While the overall CPU might accept operating temperatures of 105ºC, individual elements of the core might actually reach 125ºC instantaneously. So what is the correct value, and what is safe?
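
To illustrate the difference between the two views, here is a small sketch that reads every core temperature sensor exposed by the Linux coretemp driver and reports both the hottest individual sensor and the average across all of them. The hwmon sysfs layout is an assumption and varies between systems; this is an illustration, not the monitoring used in this review.

import glob, os

temps = []
for hwmon in glob.glob("/sys/class/hwmon/hwmon*"):
    try:
        with open(os.path.join(hwmon, "name")) as f:
            if f.read().strip() != "coretemp":      # only the CPU core sensors
                continue
    except OSError:
        continue
    for sensor in glob.glob(os.path.join(hwmon, "temp*_input")):
        with open(sensor) as f:
            temps.append(int(f.read()) / 1000.0)    # values are reported in millidegrees C

if temps:
    print(f"hottest sensor: {max(temps):.1f} C")               # the instantaneous hotspot view
    print(f"average sensor: {sum(temps) / len(temps):.1f} C")  # the whole-CPU view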

The cooler we’re using on this test is arguably the best air cooling on the market – a 1.8 kilogram full copper ThermalRight Ultra Extreme, paired with a 170 CFM high static pressure fan from Silverstone. This cooler has been used for Intel’s 10-core and 18-core high-end desktop variants over the years, even the ones with AVX-512, and has not skipped a beat. Given that we’re seeing 104ºC here, are we failing in some way?

Another issue we’re coming across with new processor technology is the ability to effectively cool a processor. I’m not talking about cooling the processor as a whole, but about those hot spots of intense current density. We are going to get to a point where we can’t remove the thermal energy fast enough, or with this design, we might be there already.

Smaller Packaging

Along this line of thinking, I will point out an interesting fact that might go unnoticed by the rest of the press – Intel has reduced the total vertical height of the new Rocket Lake processors.

The z-height, or total vertical height, of the previous Comet Lake generation was 4.48-4.54 mm. This number was taken from a range of seven CPUs I had to hand. This Rocket Lake processor, however, is over 0.1 mm thinner, at 4.36 mm. The smaller height of the package plus heatspreader could be a small indicator of the required thermal performance, especially if the gap between the die and the heatspreader (filled with solder) is smaller. If it aids cooling and doesn’t disturb how coolers fit, then great; however, at some point in the future we might have to consider different, better, or more efficient ways to remove these thermal hotspots.

Motherboard 2: Microcode 0x34

As an addendum to this review, a week after taking our original numbers we obtained a second motherboard that offered a newer microcode version from Intel.

On this motherboard, the AVX-512 response was different enough to warrant mentioning. Rather than sustain a 4.6 GHz all-core turbo for AVX-512, it initially ramped up that high, peaking at 276 W, before dropping to a 4.4 GHz all-core turbo at around 225 W. This is quite a substantial change in behaviour.

This means that at 4.4 GHz, we are running 200 MHz slower (around a 4% frequency reduction, for roughly a 3% performance decrease), but we are saving 60-70 W. This is indicative of how far these processors are from their peak efficiency point.
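
As a back-of-the-envelope check using the figures above (roughly 290 W sustained at 4.6 GHz on the first board versus 225 W at 4.4 GHz here), and assuming performance scales roughly with frequency:

f_old, p_old = 4.6, 290     # GHz, W: sustained AVX-512 on the 0x2C board
f_new, p_new = 4.4, 225     # GHz, W: sustained AVX-512 on the 0x34 board

freq_loss = 1 - f_new / f_old                                # ~4.3% lower frequency
power_saved = p_old - p_new                                  # ~65 W
perf_per_watt_gain = (f_new / p_new) / (f_old / p_old) - 1   # ~23% better perf per watt

print(f"frequency loss:     {freq_loss:.1%}")
print(f"power saved:        {power_saved} W")
print(f"perf/W improvement: {perf_per_watt_gain:.1%}")

If performance really does track frequency that closely, the newer microcode trades a few percent of throughput for a sizeable gain in efficiency.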

There was hope that this would adjust the temperature curve a little. Unfortunately we still see peaks at 103ºC when AVX-512 is first initiated; however, during the 4.4 GHz phase we are closer to 90ºC, which is far more palatable.

On AVX2 workloads with the new 0x34 microcode, the results were very similar to those with the 0x2C microcode. The workload ran at 4.6 GHz all-core, reached a peak power of 214 W, and the processor temperature was sustained at around 82ºC.

Peak Power Comparison

For completeness, here is our peak power consumption graph. These are the peak power consumption numbers taken from a series of benchmarks on which we run our power monitoring tools.

[Graph: (0-0) Peak Power]

541 Comments

  • zzzxtreme - Sunday, March 7, 2021 - link

    I wished you would have tested the XE graphics
  • Fman4 - Monday, March 8, 2021 - link

    Am I the only one who noticed that OP plugged four RAM sticks into an X570 ITX motherboard?
  • Fman4 - Monday, March 8, 2021 - link

    @Dr. Ian Cutress
  • zodiacfml - Monday, March 8, 2021 - link

    Bored. Just here to say this is unsurprising, though it strongly reminds me of the time when AMD was releasing new, well-designed CPUs but was two process node generations behind Intel. I think AMD was on 32nm and 28nm while Intel was on 22nm and 14nm. Most comments were really harsh on AMD, but I reasoned that it was simply due to Intel's manufacturing superiority.
  • blppt - Monday, March 8, 2021 - link

    Bulldozer and Piledriver are not the examples I would put up for "well designed".
  • GeoffreyA - Tuesday, March 9, 2021 - link

    Still, within that mess, AMD did a pretty good job raising Bulldozer's IPC and cutting down its power each generation. But the foundation being fatally flawed, it was hopeless. I believe it taught them a lot about cutting power and so on, and when they poured that into Zen, we saw the result. Bulldozer was a fantastic training ground, if one looks at it humorously.
  • Oxford Guy - Tuesday, March 9, 2021 - link

    No, AMD did an extremely poor job.

    Firstly, Bulldozer had worse IPC than Phenom. No engineers with brains release a CPU to replace the entire line while giving it worse IPC. The trap of going for high clocks was a lesson shown to the entire industry via Netburst. AMD's engineers knew all about it, yet someone at the company decided to try Netburst 2.0.

    Secondly, AMD was so sloppy and lazy that Piledriver shipped with a performance regression in AVX. It was worse to use AVX than to not use it. How incredibly incompetent can the company have been? It doesn't take a high IQ to understand that one doesn't ship broken AVX.

    AMD then refused to replace Piledriver until Zen came out. It tinkered half-heartedly with APU rubbish and focused on pushing junk like Jaguar.

    While it's true that the extreme failure of AMD (the construction core line) is due, to a large degree, to Intel abusing its monopoly to starve AMD of customers and cash — cash it needed to do R&D, one does not release a new chip with worse IPC and then very shortly after break AVX and refuse to stop feeding that junk to customers for many years. Just tinkering with Phenom would have been better (Phenom 3).

    As for the foundation claim... we have no idea how well the CMT concept could have worked out with competent engineering. Remember, they literally broke AVX in the Piledriver revision that was supposed to fix Bulldozer enough to make it sellable. Operations caching could have been stronger. The L3 cache was almost as slow as main memory. The RAM controller was weak, just like Phenom's. Etc.

    We paid for Intel's monopoly and we're still paying today. Only its monopoly and the lack of adequate competition is enabling the company to be so profitable despite failing so badly. Relying on two companies (or one 1/2, when it comes to R&D money ratio and other factors) to deliver adequate competition doesn't work.

    Google and Microsoft = Google owns the clearnet. Apparently, they have some sort of cooperation agreement which helps to explain why Bing has such a tiny index and such a poor-quality search.

    TSMC and Samsung = Can't meet demand.

    AMD and Nvidia = Nvidia keeps breaking profit records while utterly failing to meet demand. Both companies refuse to stop making their cards attractive for mining and have for a long long time. AMD refused to adequately compete beyond the lower midrange (Polaris forever, or you can buy a 'console'!) for a long time, leaving us to pay through the nose for Nvidia's prices. AMD literally competes against the PC market by pushing the console scam. Consoles are gaming PCs in disguise and they're parasitic in multiple ways, including in terms of wafer allocations. AMD's many many years of refusal to compete with Nvidia beyond the Polaris price point caused so much pent-up demand and now the company can enjoy the artificially high price points from that. It let Nvidia keep raising prices to get consumers used to that. Now that it has finally been forced to improve the 'consoles' beyond the garbage-tier Jaguar CPU it has to offer a bit more value to the PC gaming market. And so, after all these years, we have something decent that one can't buy. I can go on about this so-called competition but why bother. People will go to the most extravagant lengths to excuse the problem of lack of adequate competition — like the person who recently said it's easier to create Google's empire from scratch than it is to make a competitive GPU and sell it as a third GPU company.

    There are plenty of other areas in tech with inadequate competition, too.
  • blppt - Tuesday, March 9, 2021 - link

    "AMD then refused to replace Piledriver until Zen came out. It tinkered half-heartedly with APU rubbish and focused on pushing junk like Jaguar."

    To be fair, AMD had put a LOT of time, money and effort into Bulldozer/Piledriver, and were never a company with bottomless wells of cash to toss an architecture out immediately. Plus, Zen took a long time to design and finalize---thankfully, they made literally ALL the right moves in designing it, including hiring the brilliant Jim Keller.

    I think if Zen had been another BD-like failure, that would have been almost the end of AMD in the cpu market (leaving them basically as ATI was). The consoles likely would have gone with Intel or ARM for their next iteration. AMD once again spent tons of money that they don't have as disposable income in designing Zen. Two failures in a row would have been disastrous.

    Heck, the consoles might go with their own custom ARM design for PS6/Xbox(whatever) anyways.
  • GeoffreyA - Wednesday, March 10, 2021 - link

    blppt. Agreed, that would have been the end of AMD.
  • Oxford Guy - Wednesday, March 10, 2021 - link

    AMD did not put a lot of resources into fixing Bulldozer.

    It shipped Piledriver with broken AVX and never bothered to replace Piledriver on the desktop until Zen.

    Inexcusable. It shipped Steamroller and Excavator in cost-cut mode, cutting cores, cutting clocks, cutting the socket standards, and cutting cache. It used a dense library to save money by keeping the die small and used the inferior 28nm bulk process.

    Pathetic in basically every respect.
