The Cortex-X2: More Performance, Deeper OoO

We start off with the Cortex-X2, successor to last year’s Cortex-X1. The X1 marked the first in a new IP line-up from Arm which diverged its “big” core offering into two different IP lines: the Cortex-A siblings continue Arm’s original design philosophy of balanced PPA (power, performance, area), while the X-cores are allowed to grow in size and power in order to achieve much higher performance points.

The Cortex-X2 continues this philosophy, and further grows the performance and power gap between it and its “middle” sibling, the Cortex-A710. I also noticed that throughout Arm’s presentation there were a lot more mentions of the Cortex-X2 being used in larger-screen compute devices and form factors such as laptops, so it might very well be an indication from the company that some of its customers will be using the X2 more predominantly in such designs this generation.

From an architectural standpoint the X2 is naturally different from the X1, thanks in large part to its support for Armv9 and all of the security and related ISA platform advancements that come with the new re-baselining of the architecture.

As noted in the introduction, the Cortex-X2 is also a 64-bit-only core which only supports AArch64 execution, even in PL0 user-mode applications. From a microarchitectural standpoint this is interesting, as it means Arm will have been able to kick out some cruft from the design. However, as the design is a continuation of the Austin family of processors, I do wonder if we’ll see more benefits of this deprecation in future “clean-sheet” big core designs, where AArch64-only operation is designed in from the get-go. This is in fact already happening elsewhere in Arm’s CPU line-up, as the new little core Cortex-A510 was designed sans AArch32.

Starting off with the front-end, Arm has continued to improve what it considers the most important aspect of the microarchitecture: branch prediction. This includes continuing to run branch resolution decoupled from the fetch stages, so that these functional blocks can run ahead of the rest of the core in the case of mispredicts and minimize branch bubbles. Arm generally doesn’t like to go into much detail about what exactly it has changed in its predictors, but it promises a notable improvement in branch prediction accuracy for the new X2 and A710 cores, effectively reducing the MPKI (misses per kilo-instructions) metric across a very wide range of workloads.
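
As a quick reference, here’s a minimal sketch of how the MPKI metric is derived from perf-counter-style totals; the counter values are made-up placeholders purely for illustration, not measured X2 numbers.

```c
#include <stdio.h>
#include <stdint.h>

/* MPKI = branch mispredicts per 1000 retired instructions.
 * The totals below are illustrative placeholders, not measured values. */
int main(void) {
    uint64_t retired_instructions = 1200000000ULL; /* e.g. read from a perf counter */
    uint64_t branch_mispredicts   = 5400000ULL;    /* e.g. read from a perf counter */

    double mpki = (double)branch_mispredicts /
                  ((double)retired_instructions / 1000.0);
    printf("MPKI = %.2f\n", mpki); /* 5.4M misses over 1.2M kilo-instructions = 4.50 */
    return 0;
}
```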

The new core also reduces its overall pipeline length from 11 cycles to 10 cycles, as Arm has been able to cut the dispatch stages from 2 cycles to 1 cycle. It’s to be noted that we have to differentiate the pipeline depth from the mispredict penalty; the latter had already been reduced to 10 cycles in most circumstances in the Cortex-A77 design. Removing a pipeline stage is generally a rather large change, particularly given Arm’s target of maintaining the frequency capabilities of the core. The change did incur some more complex engineering and had area and power costs; but despite that, as Arm explains it, cutting a pipeline stage still offered a larger return on investment in terms of performance benefits, and was thus very much worth it.
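
To put the stage reduction into perspective, here’s a rough back-of-envelope model (not Arm’s own data): the cycles lost to branch mispredicts per instruction scale with MPKI times the mispredict penalty, so shaving even a single cycle pays off on every mispredicted branch. The MPKI and penalty values below are purely illustrative assumptions.

```c
#include <stdio.h>

/* Rough model: cycles lost to mispredicts per instruction ~ (MPKI / 1000) * penalty.
 * All numbers here are illustrative assumptions, not Arm-provided figures. */
int main(void) {
    double mpki        = 4.5;   /* assumed mispredicts per kilo-instruction */
    double penalty_old = 11.0;  /* hypothetical penalty, in cycles */
    double penalty_new = 10.0;  /* one cycle shorter */

    double lost_old = (mpki / 1000.0) * penalty_old;
    double lost_new = (mpki / 1000.0) * penalty_new;
    printf("Cycles per instruction lost to mispredicts: %.4f -> %.4f\n",
           lost_old, lost_new);
    return 0;
}
```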

The core also increases its out-of-order capabilities, growing the ROB (reorder buffer) from 224 entries to 288 entries this generation, an increase of roughly 29%. The effective figure is actually a little higher still, as in cases of compression and instruction bundling there are essentially more than 288 entries’ worth of instructions being tracked. Arm says there are also more instruction fusion cases being facilitated this generation.

On the back-end of the core, the big new change is in the FP/ASIMD pipelines, which are now SVE2-capable. In the mobile space, the SVE vector length will continue to be 128b, and the new X2 core essentially features similar throughput characteristics to the X1’s 4x FP/NEON pipelines. The choice of 128b vectors instead of something wider is due to the requirement to have homogeneous architectural feature sets amongst big.LITTLE designs, as you cannot seamlessly mix microarchitectures with different vector lengths in the same SoC.
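
To illustrate why the 128b choice is largely transparent to software, here’s a minimal vector-length-agnostic sketch using standard ACLE SVE intrinsics; the kernel itself is just an illustrative example, not something Arm showed. On a 128-bit implementation such as a mobile X2, svcntw() evaluates to 4 lanes, but the identical code runs unchanged on wider implementations.

```c
#include <arm_sve.h>
#include <stddef.h>
#include <stdint.h>

/* Vector-length-agnostic SVE loop: dst[i] = b[i] + a[i] * scale.
 * svcntw() reports the number of 32-bit lanes per vector at runtime
 * (4 on a 128b implementation), so the vector length is never hard-coded. */
void scale_add(float *dst, const float *a, const float *b,
               float scale, size_t n) {
    for (size_t i = 0; i < n; i += svcntw()) {
        svbool_t pg = svwhilelt_b32((uint64_t)i, (uint64_t)n); /* tail predication */
        svfloat32_t va = svld1_f32(pg, a + i);
        svfloat32_t vb = svld1_f32(pg, b + i);
        svfloat32_t vr = svmla_n_f32_x(pg, vb, va, scale);     /* vb + va * scale */
        svst1_f32(pg, dst + i, vr);
    }
}
```

Built with SVE enabled (for instance something like -march=armv8-a+sve2 on a recent GCC or Clang), the same loop would execute correctly whatever the hardware vector length happens to be.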

Also on the back-end, the Cortex-X2 continues to focus on increasing MLP (memory-level parallelism) by increasing the load-store windows and structure sizes by 33%. Arm employs several such structures and generally doesn’t go into detail about exactly which queues have been extended, but once we get our hands on X2 systems we’ll likely be able to measure this. The L1 dTLB has grown from 40 entries to 48 entries, and as with every generation, Arm has also improved its prefetchers, increasing accuracy and coverage.
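
As a quick back-of-envelope on what the dTLB growth buys, assuming the common 4KB page size (larger translation granules would scale the reach proportionally):

```c
#include <stdio.h>

/* L1 dTLB reach = entries * page size; 4 KB pages assumed for illustration. */
int main(void) {
    const int page_kb = 4;
    printf("Cortex-X1: 40 entries * %d KB = %d KB reach\n", page_kb, 40 * page_kb);
    printf("Cortex-X2: 48 entries * %d KB = %d KB reach\n", page_kb, 48 * page_kb);
    return 0;
}
```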

One prefetcher that surprised us in the Cortex-X1 and A78 earlier this year, when we first tested the new generation of devices, was a temporal prefetcher – the first of its kind that we’re aware of in the industry. It is able to latch onto arbitrary repeated memory patterns and recognize new iterations in memory accesses, smartly prefetching the whole pattern up to a certain depth (we estimate a 32-64MB window). Arm states that this coverage is now further increased, as is the accuracy – though again, these are details we’ll only be able to verify once we get our hands on silicon.
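
For context, the kind of access stream a temporal prefetcher can capture – and a conventional stride prefetcher cannot – looks something like the sketch below: pointer-chasing over a fixed pseudo-random cycle, repeated pass after pass, so the miss sequence itself recurs. The working-set size and pass count are assumptions chosen only to roughly match the window estimated above.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define N (1u << 22)   /* 4M entries * 8 B = 32 MB working set (assumption) */

static uint64_t next_idx[N];

/* Sattolo's algorithm: an in-place shuffle that yields a single N-element
 * cycle, so the chase below visits every entry in a fixed pseudo-random order. */
static void build_cycle(void) {
    for (uint64_t i = 0; i < N; i++) next_idx[i] = i;
    for (uint64_t i = N - 1; i > 0; i--) {
        uint64_t j = (uint64_t)rand() % i;   /* j < i guarantees one big cycle */
        uint64_t tmp = next_idx[i];
        next_idx[i] = next_idx[j];
        next_idx[j] = tmp;
    }
}

int main(void) {
    build_cycle();
    uint64_t idx = 0, sum = 0;
    for (int pass = 0; pass < 8; pass++) {      /* identical order on every pass */
        for (uint64_t i = 0; i < N; i++) {
            idx = next_idx[idx];                /* data-dependent load, no fixed stride */
            sum += idx;
        }
    }
    printf("checksum: %llu\n", (unsigned long long)sum);
    return 0;
}
```

A stride prefetcher sees no regular address progression here, but because each pass replays the exact same miss sequence, a record-and-replay (temporal) prefetcher can hide much of the latency from the second pass onwards.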

In terms of IPC improvements, this year’s figures are quoted as reaching +16% in SPECint2006 at iso-frequency. The issue with this metric (and it applies to all of Arm’s figures today) is that Arm is comparing an 8MB L3 cache design against a 4MB L3 design, so I expect a sizeable chunk of that +16% figure to be due to the larger cache rather than the core IPC improvements themselves.

For their part, Arm is reiterating that it expects 8MB L3 designs for next year’s X2 SoCs – and thus that the +16% figure is realistic and is what users should see in actual implementations. That said, we had the same discussion last year regarding Arm’s expectation of 8MB L3 caches for X1 SoCs, which didn’t materialize in either the Exynos 2100 or the Snapdragon 888. So we’ll just have to wait and see what cache sizes the flagship commercial SoCs end up shipping with.

In terms of the performance and power curves, the new X2 core extends itself ahead of the X1 curve in both metrics. The +16% performance figure refers to the peak performance points, though it does come at the cost of higher power consumption.

Generally, this is a bit worrying in the context of what we’re seeing in the market right now when it comes to vendors’ process node choices. Samsung’s 5LPE node, used by Qualcomm and S.LSI in the Snapdragon 888 and Exynos 2100, has under-delivered in terms of performance and power efficiency, and I generally consider both of those big cores’ power consumption to already be at the upper limit of what is thermally sustainable. I expect Qualcomm to stick with Samsung foundry for the next generation, so I am admittedly pessimistic about power improvements on whichever node the next flagship SoCs arrive (be it 5LPP or 4LPP). It is quite plausible that we won’t see the full +16% improvement in actual SoCs next year.

Comments

  • Ppietra - Tuesday, May 25, 2021 - link

    I believe that he was talking about the overall SPEC2006 score and not just SPECint. Still, he would be wrong about the X1 score, which would be 50 and not 40 (probably a typo).
    Anyway, a 16% improvement for X2 over X1 would mean a score of 58 which, like he said, would still be behind the A13 performance core and well behind the 72 score for the A14.
    X1 is already being manufactured at 5nm, so it makes no sense to factor in a transition from 7nm.
  • Wilco1 - Tuesday, May 25, 2021 - link

    Cortex-X1 can reach 3.2GHz in Samsung's 5nm process but the power is too high: https://images.anandtech.com/doci/16463/2100-volta...

    TSMC 5nm is faster and lower power, which allows for higher frequencies. At a conservative 3.3GHz X2 would have a combined score of ~66.7 (only 7% slower than A14).
  • Ppietra - Tuesday, May 25, 2021 - link

    That is not how it works!
    First of all, you have no idea what the advantage of using TSMC instead of Samsung would be, so you are just throwing out numbers with no substance. Secondly, X1 energy consumption is already very high (it is less efficient than the A14 Firestorm core), so no, there doesn’t seem to be a lot of room to push the X2 clock speed to 3.3GHz. Thirdly, even with your assumption you would still have the X2 performing worse than a 1-year-old core.
  • Wilco1 - Tuesday, May 25, 2021 - link

    We absolutely do know. TSMC 5nm is ~15% faster than 7nm at the same power (or 30% lower power at the same frequency). We know that SD865+ achieves 3.1GHz on 7nm and that the frequency gain from A13 on 7nm to A14 on 5nm was around 13%. So 3.3GHz should be feasible on 5nm without increasing power.

    The point is that TSMC 5nm will give a significant perf/power boost (that A14 already benefits from). And that means the gap has narrowed to only one generation rather than 2.
  • melgross - Tuesday, May 25, 2021 - link

    It’s not that simple. The cores would require a bit of a redesign for the different process, and each design would fare differently. Some might get a good boost, and others may not.
  • michael2k - Tuesday, May 25, 2021 - link

    You're comparing the X2 to the A14? I mean, if we're lucky we will see the X2 in 2022 alongside the A16. The A15 will be released this year, in 2021. We already have some X1 baselines:
    https://www.anandtech.com/show/16463/snapdragon-88...

    So in terms of generation:
    2021 X1 not competitive with the 2019 A13 now
    2021 X1 competitive with the 2019 A13 on TSMC 5nm
    2021 X1 not competitive with the 2021 A15 (est 10% boost to hit 70 SPECint)
    2022 X2 competitive with the 2020 A14 on TSMC 5nm
    2022 X2 not competitive with the 2021 A15

    That still sounds like a 2 generation gap to me. The real problem isn't fundamentally the core, but the OEM choosing not to use a 2x2 design (2 X1 and 2 A77) or (2 X2 and 2 A710), so even if the cores get faster each generation, overall performance is hobbled by using 3 medium cores instead of a pair of higher performance X1 or X2 cores.
  • Fulljack - Wednesday, May 26, 2021 - link

    It's cat and mouse, really. Apple releases its phones in late Q3, while the Samsung S-series is released in late Q1, so there's a 5 to 6 month difference.
  • Ppietra - Wednesday, May 26, 2021 - link

    Nothing of what you said gives you any data to infer anything about a transition from Samsung to TSMC.
    The SD865+ does not use an X1 core, so you have no commonality to make that kind of jump in your analysis. Secondly, the X1 core already consumes significantly more than the SD865+ core, so clearly there is not much room to increase clock speed from that perspective. If you want to increase clock speed you need to keep power consumption under control.
  • Wilco1 - Wednesday, May 26, 2021 - link

    These are different generations of the same microarchitecture from the same design team with the same frequency capability (as reported by AnandTech). So yes there is obvious commonality.

    We also know this microarchitecture is capable of higher frequencies, for example AnandTech reports Cortex-X1 can reach 3.2GHz. The main problem is power however, which is what limited Cortex-X1 on Samsung's process. TSMC 5nm reduces power by 30% which enables higher clock speeds.
  • Ppietra - Wednesday, May 26, 2021 - link

    Actually, they aren’t different generations of the same microarchitecture. The next generation of the A77 is the A78. The X1 goes for a bigger core design, and as such consumes more.
    Being capable of higher frequencies doesn't mean that Qualcomm (etc.) finds it viable to use those higher frequencies in a smartphone SoC...
    Node power reduction is stated for the same performance and microarchitecture (which the X1 is not), and only as an internal TSMC comparison... The data you cite tells you nothing about the X1 (already at 5nm) transitioning to TSMC. You are making an analysis based on wrong assumptions.
