Introducing the Confidential Compute Architecture

Over the last few years, we’ve seen hardware security and security breaches be at the forefront of the news, with vulnerabilities such as Spectre, Meltdown, and their many sibling side-channel attacks showcasing a fundamental need to rethink how we approach security. One way Arm wants to address this overarching issue is to re-architect how secure applications work, with the introduction of the Arm Confidential Compute Architecture (CCA).

Before continuing, I want to note that today’s disclosures are merely high-level explanations of how the new CCA operates; Arm says that more details on how exactly the new security mechanism works will be unveiled later this summer.

The goal of the CCA is to move away from the current software-stack situation, where applications running on a device have to inherently trust the operating system and the hypervisor beneath them. The traditional security model is built on the premise that more privileged tiers of software are allowed to, and are able to, see into the execution of lower tiers, which becomes a problem when the OS or the hypervisor is compromised in any way.

The CCA introduces the new concept of dynamically created “realms”, which can be viewed as secured, containerised execution environments that are completely opaque to the OS and hypervisor. The hypervisor would still exist, but would be solely responsible for scheduling and resource allocation. The realms would instead be managed by a new entity called the “realm manager”, a new piece of code roughly one tenth the size of a hypervisor.
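To make the described division of responsibilities concrete, here is a purely illustrative toy model in Python. Every class and method name is hypothetical, and real realms rely on hardware-enforced isolation that no software sketch can reproduce; the point is only to show which component touches what.

```python
# Toy model of the described split (all names hypothetical): the
# hypervisor keeps scheduling and resource allocation, while only the
# realm manager touches realm memory. Real CCA enforces this isolation
# in hardware; Python obviously cannot.

class Realm:
    """A containerised execution environment with memory that the
    hypervisor never receives a reference to."""
    def __init__(self, name):
        self.name = name
        self._memory = {}

class RealmManager:
    """Small trusted component that mediates all realm memory access."""
    def __init__(self):
        self.realms = {}

    def create_realm(self, name):
        self.realms[name] = Realm(name)
        return name

    def realm_write(self, realm_id, key, value):
        self.realms[realm_id]._memory[key] = value

class Hypervisor:
    """Schedules realms and allocates resources, but is handed only
    opaque realm identifiers, never the realms' memory."""
    def __init__(self):
        self.run_queue = []

    def schedule(self, realm_id):
        self.run_queue.append(realm_id)

rm = RealmManager()
hv = Hypervisor()
rid = rm.create_realm("payments-app")
rm.realm_write(rid, "session_key", 0x1234)  # only the realm manager writes
hv.schedule(rid)                            # the hypervisor sees an ID, nothing more
print(hv.run_queue)                         # ['payments-app']
```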

Applications within a realm would be able to “attest” the realm manager in order to determine that it can be trusted, which isn’t possible with, say, a traditional hypervisor.
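As a rough sketch of what such attestation could look like: the flow and names below are assumptions on my part, since Arm has not yet published the actual mechanism, and a real scheme would use asymmetric keys rooted in hardware rather than the shared HMAC secret used here for brevity.

```python
# Hypothetical attestation sketch: the realm manager's code is measured
# (hashed), the measurement is signed with a device-held key, and the
# application compares it against a known-good reference before
# trusting the realm manager. HMAC stands in for a hardware-rooted
# asymmetric signature to keep the sketch short.

import hashlib
import hmac

DEVICE_KEY = b"stand-in-for-hardware-root-of-trust"

def measure(code: bytes) -> bytes:
    return hashlib.sha256(code).digest()

def attest(code: bytes):
    """Realm-manager side: produce (measurement, signature)."""
    m = measure(code)
    return m, hmac.new(DEVICE_KEY, m, hashlib.sha256).digest()

def verify(measurement: bytes, signature: bytes, known_good: bytes) -> bool:
    """Application side: check the signature, then compare the
    measurement against the trusted reference value."""
    expected = hmac.new(DEVICE_KEY, measurement, hashlib.sha256).digest()
    return (hmac.compare_digest(signature, expected)
            and hmac.compare_digest(measurement, known_good))

trusted_code = b"tiny realm manager code"
known_good = measure(trusted_code)

m, sig = attest(trusted_code)
print(verify(m, sig, known_good))        # True: the realm manager checks out

m2, sig2 = attest(b"tampered realm manager")
print(verify(m2, sig2, known_good))      # False: measurement mismatch
```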

Arm didn’t go into more depth on what exactly creates this separation between the realms and the non-secure world of the OS and hypervisor, but it sounded like hardware-backed address spaces that cannot interact with each other.

The advantage of using realms is that they vastly reduce the chain of trust of a given application running on a device, with the OS removed from the set of components that must be trusted for security. Mission-critical applications that require supervisory controls would be able to run on any device, as opposed to today’s situation where corporations or businesses require the use of dedicated devices with authorised software stacks.

Not new to v9, but rather introduced with v8.5, MTE, or the Memory Tagging Extension, is aimed at helping with two of the most persistent security issues in the world’s software. Buffer overflows and use-after-free errors are continuing issues that have been part of software design for the past 50 years, and can take years to be identified or resolved. MTE aims to help identify such issues by tagging pointers upon allocation and checking the tags upon use.
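The mechanism can be illustrated with a small simulation. This is a model of the idea only, not real MTE code: actual MTE stores 4-bit tags in the upper bits of 64-bit pointers and tags physical memory in 16-byte granules, while this sketch simply mirrors the allocate-tag, access-check, free-retag cycle.

```python
# Toy simulation of memory tagging (not real MTE code): allocation
# stamps the pointer and the memory with the same random tag, every
# access compares the two, and freeing retags the memory so that stale
# pointers and out-of-bounds accesses fault.

import random

class TaggedHeap:
    def __init__(self, size=256):
        self.mem = bytearray(size)
        self.tags = [0] * size          # per-byte tag (real MTE: per 16-byte granule)
        self.next_free = 0

    def malloc(self, n):
        tag = random.randint(1, 15)     # 4-bit tag; 0 is reserved for "untagged"
        addr = self.next_free
        self.next_free += n
        for i in range(addr, addr + n):
            self.tags[i] = tag
        return (tag, addr)              # a "pointer" is a (tag, address) pair

    def free(self, ptr, n):
        _, addr = ptr
        for i in range(addr, addr + n):
            self.tags[i] = 0            # retag: stale pointers now mismatch

    def store(self, ptr, offset, value):
        tag, addr = ptr
        if self.tags[addr + offset] != tag:
            raise MemoryError("tag check fault")   # analogous to an MTE fault
        self.mem[addr + offset] = value

heap = TaggedHeap()
p = heap.malloc(16)
heap.store(p, 0, 42)        # in-bounds store: tags match, succeeds

try:
    heap.store(p, 16, 7)    # one byte past the allocation: tag mismatch
except MemoryError:
    print("overflow caught")

heap.free(p, 16)
try:
    heap.store(p, 0, 1)     # use-after-free: the memory was retagged
except MemoryError:
    print("use-after-free caught")
```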

74 Comments

  • mdriftmeyer - Thursday, April 1, 2021

    Considering EPYC Genoa is 96 cores / 192 threads and will include Xilinx specialty processors for Zen 4, I would have just left that as the comment. Intel's new CEO will ratchet up specialty processing on future Intel solutions as well.
  • mdriftmeyer - Thursday, April 1, 2021

    Sorry, but that's actually not even remotely close. Just head over to Phoronix and see how badly Milan whips the competition across the board. And yes, Phoronix has a much larger suite of test applications than AnandTech.
  • Wilco1 - Friday, April 2, 2021

    AnandTech is one of the few sites that produces accurate benchmark results across different ISAs. SPEC is an industry-standard benchmark to compare servers, and I don't see anything like it on Phoronix. Phoronix just runs a bunch of mostly unknown benchmarks without even checking that the results are meaningful across ISAs (they are not, in many cases). Quantity does not imply quality.
  • RSAUser - Saturday, April 3, 2021

    SPEC is quite flawed; you can go read up on it. It basically only cares about cache and cache latency, so it is not an accurate representation of how stuff performs between different architectures.

    It's actually quite difficult to compare between architectures unless you know the specific use case, and Apple has done really well with the interpretation layer. I think .NET Core/5 from MS will also help MS quite a bit with that over the next few years when they start moving a lot of their products to their own architecture.
  • Wilco1 - Saturday, April 3, 2021

    SPEC consists of real applications like the GCC compiler. More cache, lower-latency memory and higher IPC*frequency give better scores, just like with any other code. SPEC is not perfect by any means, but it is the best cross-ISA benchmark that exists today.

    What Phoronix does is test how well code is optimized. If you see x86 being much faster than AArch64, then clearly that code hasn't been optimized for AArch64. simdjson treated AArch64 as first-class from the start and has thus had similar optimization effort to x86, and you can see that in the results. But that's not the case for many other random projects that are not popular (yet) on AArch64. So Phoronix results are completely useless if you are interested in comparing CPU performance.
  • Wilco1 - Saturday, April 3, 2021

    Genoa is 2022; Altra Max has 128 cores in 2021.
  • abufrejoval - Wednesday, March 31, 2021

    I just hope they put CCA into client-side SoCs as well. So far all those 'realm', 'enclave' or VM-encryption enhancements have only targeted server-side chips, but I don't think the vendor-favoured walled-garden approach has much of a future; there is an urgent need for more federation.
  • bobwya - Wednesday, March 31, 2021

    "The benefit of SVE and SVE2 beyond addition various modern SIMD capabilities is in their variable vector size" - que?!! :-)
  • Matthias B V - Thursday, April 1, 2021

    Glad to see it. At least with the new architecture they finally have to update their small cores. I was so tired of the A55... Only the big cores get the focus, though in my opinion the small ones are just as important, or even more so.

    SVE2 is great; I wonder how Intel and AMD will react to this. They should work on similar features and also create a "Lean86" that gets rid of legacy if they want to defend market share. That, and more flexible features like SVE, would benefit them a lot.

    I am quite excited about what Armv9.x can do in tablets and ultrabooks etc.
