Intro to Privacy-Enhancing Technologies (PETs)
This talk provides an overview of Privacy-Enhancing Technologies (PETs) designed to enable secure, collaborative computation among mutually distrustful parties. It demonstrates how techniques like Multi-Party Computation (MPC) and Fully Homomorphic Encryption (FHE) allow for data analysis without exposing raw, sensitive information. The presentation highlights practical applications, such as secure elections, private data lookups, and secure model inference, emphasizing the trade-offs between computational overhead and privacy. The speaker also discusses the role of threshold signing in mitigating single points of failure in custodial wallet systems.
Beyond Plaintext: Why Privacy-Enhancing Technologies Are Your Next Target
TLDR: Privacy-Enhancing Technologies (PETs) like Multi-Party Computation (MPC) and Fully Homomorphic Encryption (FHE) are moving from academic theory to production environments in custodial wallets and machine learning pipelines. While these tools promise to secure data even while it is being processed, they introduce new, complex attack surfaces that researchers must understand. Pentesters should focus on the implementation flaws in these cryptographic protocols rather than assuming the underlying math is the only point of failure.
Security researchers often treat cryptographic implementations as black boxes. We assume that if a system uses advanced primitives like FHE or MPC, the data is inherently safe. That assumption is becoming a liability. As companies rush to adopt PETs to comply with data privacy regulations or to secure sensitive machine learning inference, they are deploying custom, often brittle, implementations of these protocols. If you are testing a system that claims to perform "secure computation," you are no longer just looking for standard web vulnerabilities. You are looking for logic flaws in how these parties exchange shares or how they handle encrypted inputs.
The Mechanics of Secure Computation
At the core of these technologies is the goal of computing on data without ever decrypting it. In a standard web application, you might send a plaintext value to a server, which then processes it. With FHE, you send an encrypted value, the server performs operations on that ciphertext, and returns an encrypted result that only the client can decrypt.
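To make that flow concrete, here is a toy sketch of an additively homomorphic scheme (Paillier) in Python. The primes are deliberately tiny for readability; a real deployment would use ~2048-bit moduli and a vetted library, not hand-rolled code like this.

```python
import math
import random

def keygen(p=2357, q=2551):
    """Toy Paillier keygen. p, q are demo primes only."""
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    mu = pow(lam, -1, n)  # valid simplification when g = n + 1
    return (n,), (n, lam, mu)

def encrypt(pk, m):
    (n,) = pk
    n2 = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:  # r must be invertible mod n
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    n, lam, mu = sk
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

pk, sk = keygen()
c1, c2 = encrypt(pk, 20), encrypt(pk, 22)
# The server multiplies ciphertexts; the plaintexts get added.
c_sum = (c1 * c2) % (pk[0] ** 2)
assert decrypt(sk, c_sum) == 42
```

The point of the sketch: the server only ever sees `c1`, `c2`, and `c_sum`, yet the client recovers the correct sum after decryption.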
The technical hurdle here is computational overhead. As demonstrated in recent research, while addition on encrypted data is relatively cheap, multiplication is computationally expensive. This is why you will often see developers attempting to optimize these circuits by breaking down complex operations into simpler, "cheaper" arithmetic. This is exactly where the vulnerability lies. When a developer tries to optimize a circuit, they often introduce side-channel leaks or logic errors that allow an attacker to infer the underlying secret data by observing the timing or the structure of the encrypted operations.
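A crude way to see the asymmetry, and the kind of measurement an attacker makes, is a timing harness. Here plain Python big-integer arithmetic stands in for ciphertext operations; the numbers and trial count are illustrative.

```python
import time
import statistics

def median_time(fn, trials=50):
    """Median wall-clock time of fn() over several runs."""
    samples = []
    for _ in range(trials):
        t0 = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)

# Large integers standing in for ciphertext-sized operands.
a = (1 << 500000) - 1
b = (1 << 500000) - 3

t_add = median_time(lambda: a + b)  # roughly linear in operand size
t_mul = median_time(lambda: a * b)  # super-linear, noticeably slower
```

The same harness, pointed at a "secure computation" endpoint, is how you check whether response timing varies with secret-dependent structure in an optimized circuit.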
Where PETs Break in the Real World
Custodial wallets are the most immediate battleground for these technologies. Many providers now use threshold signing to prevent a single compromised server from draining a user's funds. Instead of one server holding a private key, the key is split into shares using MPC. To sign a transaction, the client and the server must perform a multi-party protocol to generate a signature without ever reconstructing the full key.
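The key-splitting idea can be illustrated with plain Shamir secret sharing. This is a toy sketch: production threshold-signing wallets use dedicated threshold ECDSA protocols, and the reconstruction step below exists only to verify the threshold property; real signing never rebuilds the key.

```python
import random

PRIME = 2**127 - 1  # Mersenne prime used as the field modulus (toy choice)

def split(secret, k, n):
    """Split secret into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    return [
        (x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
        for x in range(1, n + 1)
    ]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
assert reconstruct(shares[1:4]) == 123456789
# Two shares (below threshold) yield garbage, with overwhelming probability.
assert reconstruct(shares[:2]) != 123456789
```

Each server holds one `(x, y)` pair; compromising fewer than `k` of them reveals nothing about the key.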
The vulnerability in these systems is rarely the math behind the threshold scheme. It is the implementation of the communication protocol between the parties. If you are auditing a custodial wallet, look for:
- Protocol State Machine Flaws: Can you force the server to participate in a signing session with an invalid or malicious message?
- Insufficient Thresholds: Does the implementation allow a single party to initiate a signature without the required quorum?
- Replay Attacks: Can you replay a partial signature share from a previous session to influence the current one?
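A hypothetical server-side check illustrating the replay item: each partial share must be bound, via a MAC, to a fresh session ID, so a share captured in an earlier session fails verification later. All names and the MAC construction here are illustrative, not a real wallet's protocol.

```python
import hashlib
import hmac
import secrets

class SigningSession:
    """Hypothetical per-transaction signing session with replay protection."""

    def __init__(self, message: bytes):
        self.session_id = secrets.token_bytes(16)  # fresh per session
        self.message = message
        self.seen_parties = set()

    def accept_share(self, party_id: str, share: bytes,
                     tag: bytes, key: bytes) -> bool:
        # Reject duplicate contributions from the same party.
        if party_id in self.seen_parties:
            return False
        # The tag must cover (session_id || message || share), so a share
        # replayed from an old session fails this comparison.
        expected = hmac.new(key, self.session_id + self.message + share,
                            hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return False
        self.seen_parties.add(party_id)
        return True

key = b"per-party-mac-key"  # illustrative
sess = SigningSession(b"transfer 1 BTC")
share = b"partial-signature-bytes"
tag = hmac.new(key, sess.session_id + sess.message + share,
               hashlib.sha256).digest()

assert sess.accept_share("party-A", share, tag, key)
assert not sess.accept_share("party-A", share, tag, key)  # duplicate party

replay = SigningSession(b"transfer 1 BTC")  # new session, same message
assert not replay.accept_share("party-A", share, tag, key)  # stale tag fails
```

When auditing, look for the inverse: implementations that MAC only the share, or only the message, leave exactly the replay window described above.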
For those interested in the underlying standards, the OWASP Cryptographic Storage Cheat Sheet provides a baseline, but it does not cover the nuances of MPC. You need to look at the specific documentation for the libraries being used, such as MP-SPDZ, which is a common framework for multi-party computation.
Testing for Logic Flaws in Encrypted Pipelines
When testing machine learning models that use secure inference, the target is the interaction between the client and the model provider. If the provider uses FHE to protect their model weights, they are essentially trying to prevent you from stealing their IP. If they use it to protect your input, the goal is to keep the provider from ever seeing your plaintext data.
During an engagement, treat the encrypted input as a fuzzing target. If you can control the input, try to send malformed ciphertexts. Does the server crash? Does it return an error that reveals information about the underlying plaintext? A common mistake is failing to properly validate the range of the decrypted input after the computation is complete. If the model expects a normalized input between 0 and 1, but the FHE circuit allows for arbitrary values, you might be able to trigger unexpected behavior in the model's decision-making logic.
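A minimal harness for that approach: bit-flip a known-good ciphertext blob and bucket the responses. Distinct error messages across buckets are themselves a finding, since they can leak structural information about the plaintext space. The `stub_handler` below is a stand-in for the real decrypt-and-infer endpoint; its checks are invented for illustration.

```python
import random

def bit_flip(ct: bytes) -> bytes:
    """Flip one random bit of a ciphertext blob."""
    buf = bytearray(ct)
    pos = random.randrange(len(buf))
    buf[pos] ^= 1 << random.randrange(8)
    return bytes(buf)

def classify(handler, ct: bytes) -> str:
    """Bucket the server's reaction to a mutated ciphertext."""
    try:
        handler(ct)
        return "ok"
    except ValueError as e:
        return f"reject:{e}"  # distinguishable errors may leak structure
    except Exception:
        return "crash"

def stub_handler(ct: bytes):
    """Hypothetical endpoint: invented validation rules for the demo."""
    if len(ct) != 32:
        raise ValueError("bad length")
    if any(b & 0x80 for b in ct):
        raise ValueError("not in ciphertext space")
    return b"result"

baseline = bytes(32)  # a known-good ciphertext would go here
buckets = {classify(stub_handler, bit_flip(baseline)) for _ in range(200)}
```

If `buckets` contains more than one kind of rejection, the error channel is distinguishing malformed inputs for you, which is exactly the oracle you want to probe further.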
The Defensive Reality
Defending these systems requires a shift in mindset. You cannot rely on traditional perimeter security when the data itself is designed to be processed in an untrusted environment. The most effective defense is to ensure that the protocol implementation is formally verified and that the communication channels are strictly authenticated. If you are working with a blue team, push them to implement strict rate-limiting on the computation endpoints. Even if the data is encrypted, the computational cost of these operations is high. An attacker can easily perform a Denial of Wallet (DoW) attack by flooding the server with requests that force it to perform expensive FHE multiplications.
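One way to sketch that defense is cost-aware rate limiting: a token bucket where each homomorphic operation is charged by its actual expense rather than counted as one request. Parameter values below are illustrative.

```python
import time

class CostBudget:
    """Token bucket where expensive ops (e.g. FHE multiplications) cost more."""

    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = capacity
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self, cost: float) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

budget = CostBudget(capacity=100, refill_per_sec=5)
assert budget.allow(cost=1)        # cheap homomorphic addition passes
assert not budget.allow(cost=500)  # a batch of multiplications is refused
```

Charging multiplications at a higher cost than additions directly blunts the Denial of Wallet pattern: the attacker's request budget drains in proportion to the compute they force.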
Privacy-enhancing technologies are not a silver bullet. They are a new layer of complexity that requires the same rigorous scrutiny we apply to any other part of the stack. Do not let the "cryptography" label scare you away from finding the bugs. The math might be sound, but the code that runs it is written by humans, and humans make mistakes. Keep digging into the protocols, keep fuzzing the inputs, and keep questioning the assumptions of the developers who think their encrypted data is invisible to you.