Confidential Computing: Protecting Customer Data in the Cloud
This talk provides a technical overview of confidential computing, focusing on the use of Trusted Execution Environments (TEEs) to protect data in use. It explains the threat model for cloud environments, detailing how hardware-based isolation and remote attestation mitigate risks from compromised host operating systems or hypervisors. The presentation compares specific implementations, including AMD SEV-SNP, AWS Nitro Enclaves, and Intel TDX, highlighting their security guarantees and limitations. It also discusses the importance of open-source software and reproducible builds for establishing a verifiable chain of trust.
Beyond the Hypervisor: Why Confidential Computing is the New Frontier for Data Protection
TL;DR: Confidential computing shifts the trust boundary from the cloud service provider to the hardware manufacturer by using Trusted Execution Environments (TEEs) to protect data in use. While this mitigates risks from compromised hypervisors or malicious cloud operators, it does not eliminate application-level vulnerabilities or side-channel attacks. Pentesters should focus on the attestation process and the integrity of the boot chain to identify potential bypasses in these hardened environments.
Cloud security has long been defined by the assumption that the hypervisor is a trusted component. We encrypt data at rest and in transit, but once that data hits the CPU for processing, it is effectively naked to anyone with root access on the host. If a cloud service provider’s administrator or a sophisticated attacker compromises the host operating system or the hypervisor, your memory is theirs. This is the fundamental problem that Confidential Computing aims to solve. By moving the trust boundary from the software stack to the silicon, we can finally treat the cloud provider as an untrusted entity for data processing.
The Mechanics of Hardware Isolation
At its core, confidential computing relies on a Trusted Execution Environment (TEE). Think of a TEE as a hardware-enforced sandbox that isolates code and data from the rest of the system. Even if the host kernel is fully compromised, it cannot inspect or modify the memory contents inside the TEE.
The real magic, however, is not just the isolation; it is the Remote Attestation. When you deploy a workload into a TEE, the hardware generates a cryptographically signed report. This report contains a measurement—a hash—of the code, the initial state of the environment, and the hardware configuration. A third party can verify this signature against the manufacturer’s root of trust to prove that the code running in the cloud is exactly what you intended to deploy. If the hypervisor tries to swap your binary for a malicious one, the measurement changes, the attestation fails, and the system refuses to provision the secrets required for the workload to function.
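The attestation flow above can be sketched in a few lines. This is a deliberately simplified model, not any vendor's real protocol: a real TEE report is signed with a hardware key chained to the manufacturer's root of trust, and here a shared-key HMAC stands in for that signature so the flow is runnable with the standard library alone. All names are hypothetical.

```python
import hashlib
import hmac

# Stand-in for the manufacturer's root of trust (a real system uses an
# asymmetric key chain burned into the silicon, not a shared secret).
MANUFACTURER_KEY = b"stand-in-for-hardware-root-of-trust"

def measure(workload: bytes) -> bytes:
    """Hash of the code and initial state, as the hardware would compute it."""
    return hashlib.sha384(workload).digest()

def hardware_sign_report(workload: bytes) -> dict:
    """What the TEE emits: a measurement plus a signature over it."""
    m = measure(workload)
    return {
        "measurement": m,
        "signature": hmac.new(MANUFACTURER_KEY, m, hashlib.sha384).digest(),
    }

def verify_report(report: dict, expected_workload: bytes) -> bool:
    """Relying party: check the signature first, then the measurement."""
    sig_ok = hmac.compare_digest(
        report["signature"],
        hmac.new(MANUFACTURER_KEY, report["measurement"], hashlib.sha384).digest(),
    )
    meas_ok = hmac.compare_digest(
        report["measurement"], measure(expected_workload)
    )
    return sig_ok and meas_ok

good = hardware_sign_report(b"my-approved-binary")
assert verify_report(good, b"my-approved-binary")
# A swapped binary changes the measurement, so attestation fails:
assert not verify_report(good, b"attacker-binary")
```

The key property is the same one described above: the verifier never has to trust the host, only the signature chain and the expected measurement it computed itself.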
Implementation Realities: SEV-SNP vs. Nitro Enclaves
Different cloud providers have taken distinct approaches to implementing this hardware-based security. AMD’s SEV-SNP (Secure Encrypted Virtualization-Secure Nested Paging) is the current industry standard for confidential VMs. It encrypts the entire memory space of a virtual machine with a unique key that the hypervisor cannot access. The hardware enforces memory integrity, preventing the hypervisor from remapping pages or tampering with the guest’s memory.
AWS takes a different route with Nitro Enclaves. Instead of encrypting an entire VM, Nitro Enclaves allows you to carve out a dedicated, isolated compute environment within an EC2 instance. This environment has no persistent storage, no external network access, and no interactive access. It communicates with the parent instance over a local virtual socket. This is a much smaller attack surface, but it requires you to refactor your application to run within the enclave.
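Because the enclave's only channel to the outside world is that local virtual socket, the parent instance talks to it over `AF_VSOCK`. The sketch below assumes Linux vsock support; the CID, port, and length-prefixed JSON framing are illustrative choices, not an AWS-defined protocol.

```python
import json
import socket
import struct

# Illustrative values: the enclave's context ID (CID) and listening
# port are assigned at launch time, not fixed constants.
ENCLAVE_CID = 16
ENCLAVE_PORT = 5000

def encode_msg(payload: dict) -> bytes:
    """Length-prefixed JSON framing (an assumption, not an AWS format)."""
    body = json.dumps(payload).encode()
    return struct.pack(">I", len(body)) + body

def decode_msg(data: bytes) -> dict:
    """Inverse of encode_msg: read the 4-byte length, then the JSON body."""
    (length,) = struct.unpack(">I", data[:4])
    return json.loads(data[4:4 + length].decode())

def send_to_enclave(payload: dict) -> None:
    # Requires a Linux host with vsock support and a running enclave;
    # shown for shape only.
    with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s:
        s.connect((ENCLAVE_CID, ENCLAVE_PORT))
        s.sendall(encode_msg(payload))

# The framing round-trips:
msg = {"op": "decrypt", "key_id": "example"}
assert decode_msg(encode_msg(msg)) == msg
```

The refactoring cost mentioned above shows up exactly here: every interaction your application used to do over the network or filesystem has to be funneled through a narrow, explicitly designed channel like this one.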
For those looking to experiment, Enclaver is a project that simplifies the process of packaging applications into Nitro Enclaves. It handles the heavy lifting of creating the enclave image file (EIF) and managing the attestation flow, which is often the most complex part of the implementation.
Pentesters and the New Threat Model
When you are auditing a system that uses confidential computing, the traditional "get root on the box" approach changes. You are no longer looking for ways to dump memory from the hypervisor, because the hardware will simply return encrypted garbage. Instead, your focus shifts to the attestation service and the boot chain.
If the application relies on a Key Broker Service to release secrets after a successful attestation, that service becomes your primary target. If you can trick the verifier into accepting a spoofed attestation report, you can gain access to the keys. Furthermore, side-channel attacks remain a significant concern. Even with memory encryption, an attacker with control over the host can observe cache access patterns or timing differences to infer data being processed inside the TEE.
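A minimal sketch of the Key Broker logic under attack here, with all names hypothetical: the broker issues a one-time nonce, then releases the key only for a fresh report with the expected measurement. A real broker also verifies the hardware signature chain, which is elided for brevity; the point is the two checks a pentester probes, replay (nonce) and spoofing (measurement).

```python
import hashlib
import secrets

# The measurement the broker was configured to trust (illustrative).
EXPECTED_MEASUREMENT = hashlib.sha384(b"approved-enclave-image").hexdigest()
_issued_nonces = set()

def new_challenge():
    """Issue a one-time nonce so a captured report cannot be replayed."""
    nonce = secrets.token_hex(16)
    _issued_nonces.add(nonce)
    return nonce

def release_key(report):
    """Release the secret only for a fresh report with the right measurement."""
    if report.get("nonce") not in _issued_nonces:
        return None                              # replayed or forged challenge
    _issued_nonces.discard(report["nonce"])      # single use
    if report.get("measurement") != EXPECTED_MEASUREMENT:
        return None                              # wrong code is running
    return b"workload-data-key"

nonce = new_challenge()
report = {"nonce": nonce, "measurement": EXPECTED_MEASUREMENT}
assert release_key(report) == b"workload-data-key"
assert release_key(report) is None  # replaying the same report is rejected
```

If either check is weak, the TEE's guarantees evaporate: a verifier that skips nonce tracking accepts replayed reports, and one that compares measurements loosely accepts attacker-controlled code.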
Defensive Considerations
Defenders must recognize that confidential computing is not a silver bullet. It does not protect against application-level flaws like injection or insecure API design. If your code running inside the TEE has a buffer overflow, the TEE will happily execute the exploit.
The most critical defensive step is ensuring that the guest VM firmware is verifiable. Many cloud providers use closed-source firmware, which creates a blind spot in your chain of trust. Where possible, use open-source firmware like OVMF and ensure that your build pipeline produces reproducible binaries. If you cannot verify the firmware, you are essentially trusting the cloud provider to not inject a backdoor before your code even starts executing.
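In practice the reproducibility check above reduces to a hash comparison: two independent builds of the firmware image must measure identically before that measurement is pinned as the expected attestation value. A minimal sketch, with in-memory stand-ins for the build artifacts:

```python
import hashlib

def measurement(blob):
    """The hash that gets pinned as the expected firmware measurement."""
    return hashlib.sha384(blob).hexdigest()

# Stand-ins for two independently produced firmware images (e.g. your
# own OVMF build and a rebuild on a separate, clean pipeline).
build_a = b"ovmf-firmware-bytes"
build_b = b"ovmf-firmware-bytes"

# Only if the builds agree is the hash trustworthy enough to pin.
assert measurement(build_a) == measurement(build_b)
```

If the two builds diverge, you cannot distinguish a toolchain nondeterminism problem from a supply-chain compromise, which is precisely the blind spot closed-source firmware leaves open.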
Confidential computing is maturing rapidly, but it demands a shift in how we think about trust. It forces us to be explicit about what we are protecting and from whom. As these technologies become standard, the ability to audit the attestation flow and the integrity of the TEE will become a core competency for any serious security researcher. Stop assuming the hypervisor is your friend and start verifying the silicon.