Using Deep Learning Attribution Methods for Fault Injection Attacks

Black Hat

Description

This presentation explores how deep learning attribution methods, traditionally used in image processing, can reverse-engineer hardware security mechanisms for fault injection attacks. By analyzing power consumption traces of the Analog Devices DS28C36 secure authenticator, researchers identified critical timing windows to bypass multi-stage security checks using laser fault injection.

Hacking the Shield: Leveraging Deep Learning Attribution for Precision Fault Injection

Introduction

In the high-stakes world of hardware security, the 'Black Box' is the ultimate challenge. When a security researcher encounters a new secure authenticator or microcontroller, they are often flying blind, without schematics or source code. Traditionally, finding a vulnerability requires months of painstaking manual scanning—literally poking and prodding silicon with lasers and needles. However, a new frontier is emerging: using Artificial Intelligence not just to find software bugs, but to reverse-engineer physical hardware processes.

This post explores groundbreaking research into using Deep Learning (DL) attribution methods to automate the discovery of timing windows for Fault Injection (FI) attacks. By treating power consumption traces like images, we can use the same technology that identifies cats in photos to identify the exact nanosecond a chip checks its security fuses. This transition from manual trial-and-error to data-driven exploitation represents a significant shift in the hardware security landscape, making advanced attacks more accessible and repeatable.

Background & Context

Hardware attacks generally fall into two categories: Side-Channel Attacks (SCA), which involve 'listening' to the chip's emissions (power, EM, timing), and Fault Injection (FI), which involves 'perturbing' the chip (voltage glitches, laser pulses) to change its behavior. The most difficult part of FI is synchronization. You need to know exactly where on the chip to hit and exactly when to hit it.

In black-box scenarios, finding this 'when'—the timing offset—is a massive bottleneck. Secure elements often include countermeasures like desynchronization or double-checks (verifying a security bit twice). If an attacker doesn't know these checks are happening, their single-fault attempts will fail, leading them to believe the chip is secure. Deep learning attribution methods allow us to 'look inside' a model trained on these traces to see what the model sees, effectively turning a black box into a gray one.

Technical Deep Dive

Understanding Deep Learning Attribution

Attribution methods like Layer-wise Relevance Propagation (LRP) or Gradient-based analysis were designed to make AI 'explainable.' In image processing, if a model identifies a picture as a 'dog,' LRP can highlight the specific pixels (the ears, the snout) that led to that decision. In hardware security, we replace pixels with 'samples' from a power trace. By training a model to distinguish between a 'protected' memory read and an 'unprotected' memory read, we can use LRP to highlight the exact peaks in the power consumption signal where the security bit is being processed.
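The idea can be sketched with a toy gradient-based attribution over a power trace. This is a minimal illustration, not the talk's actual pipeline: the two-layer MLP below stands in for a classifier trained to separate 'protected' from 'unprotected' traces, and its weights are random placeholders rather than trained values.

```python
import numpy as np

# Toy sketch of gradient-based attribution (saliency) on a power trace.
# In the real attack the model would be trained on ~50k labelled traces;
# here random weights simply demonstrate the mechanics of the method.
rng = np.random.default_rng(0)
n_samples = 200                       # points per captured power trace
W1 = rng.normal(size=(n_samples, 32)) # input -> hidden weights
b1 = np.zeros(32)
w2 = rng.normal(size=32)              # hidden -> score weights

def saliency(trace):
    """Return |d score / d sample| for every point of the trace."""
    h_pre = trace @ W1 + b1           # hidden pre-activations
    mask = (h_pre > 0).astype(float)  # derivative of ReLU
    # Chain rule: d score / d input_i = sum_j W1[i,j] * relu'(h_j) * w2[j]
    grad = W1 @ (mask * w2)
    return np.abs(grad)

trace = rng.normal(size=n_samples)    # stand-in for one captured Icc trace
rel = saliency(trace)
peak = int(np.argmax(rel))            # the sample the model 'looks at' most
```

With a trained model, high-relevance samples such as `peak` mark the clock cycles where the security bit is actually being processed; LRP works the same way but propagates relevance layer by layer instead of using raw gradients.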

Step-by-Step Exploitation Implementation

  1. Data Acquisition: We collect approximately 50,000 power traces from the target (in this case, the Analog Devices DS28C36) while it is in an unprotected state, and another 50,000 while it is in a protected/locked state.
  2. Model Training: Using a Multi-Layer Perceptron (MLP) or Convolutional Neural Network (CNN), we train a classifier to identify the state based on the trace. Using tools like Scandal, this training can often be done on consumer-grade hardware in under 15 minutes.
  3. Attribution Analysis: We apply LRP to the model's decision. This produces a 'relevance' map over the power trace.
  4. Interpreting Results: In the DS28C36 experiment, the attribution map showed two distinct peaks of high relevance. This was the 'smoking gun'—it proved the chip was performing a double-check on the security fuses before allowing or denying memory access.
  5. Fault Injection: Knowing there are two checks, we set up a double laser fault injection. We target the logic area of the chip (identified via IR camera) and fire two pulses at the exact offsets identified by the DL model.
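Steps 3 to 5 hinge on turning the relevance map into concrete glitch offsets. The sketch below shows one plausible way to do that; the relevance curve is synthetic, with two bumps standing in for the double fuse-check the researchers observed on the DS28C36, and the peak-picking heuristic is an assumption rather than the talk's exact method.

```python
import numpy as np

# Synthetic relevance map: two Gaussian bumps mimic the two high-relevance
# zones that the attribution analysis revealed on the real target.
t = np.arange(1000)
relevance = (np.exp(-0.5 * ((t - 240) / 8) ** 2)
             + np.exp(-0.5 * ((t - 610) / 8) ** 2))

def top_peaks(rel, n=2, guard=50):
    """Greedily pick the n highest peaks at least `guard` samples apart."""
    rel = rel.copy()
    offsets = []
    for _ in range(n):
        i = int(np.argmax(rel))
        offsets.append(i)
        lo, hi = max(0, i - guard), min(len(rel), i + guard)
        rel[lo:hi] = 0                # suppress the found peak's neighbourhood
    return sorted(offsets)

offsets = top_peaks(relevance)        # candidate trigger times, in samples
```

Converted from sample indices to nanoseconds via the scope's sampling rate, these two offsets become the trigger delays for the two laser pulses in step 5.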

Tools and Techniques

  • Scandal: An open-source tool specifically designed for applying DL to side-channel and fault injection research.
  • Laser Bench: High-precision ALPhANOV laser systems for physical perturbation.
  • Oscilloscopes: Required for high-fidelity capture of power consumption (Icc) traces.

Mitigation & Defense

For hardware vendors, this research is a wake-up call. Simple double-checks are no longer sufficient to stop fault injection when AI can pinpoint their timing. To defend against these techniques, engineers should implement:

  • Power Blinding: Masking the power consumption of sensitive operations so that 'protected' and 'unprotected' traces look identical to a neural network.
  • Advanced Desynchronization: Introducing random delays (jitter) that are significant enough to break the alignment required for DL training.
  • Higher Order Redundancy: Moving beyond double-checking to more complex, non-linear verification paths.
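The desynchronization defence can be illustrated with a small simulation. This is a toy model under stated assumptions: the 'leakage' is a single spike added to Gaussian noise, and the jitter is a uniform random delay; real countermeasures insert random wait states in hardware.

```python
import numpy as np

# Toy model of jitter-based desynchronization: a random delay before the
# sensitive operation shifts its power signature, so the same feature lands
# at a different sample offset on every capture.
rng = np.random.default_rng(1)
trace_len, feature_at = 500, 200

def capture(max_jitter):
    """Simulate one trace whose leakage peak is displaced by random jitter."""
    jitter = int(rng.integers(0, max_jitter + 1))
    trace = rng.normal(0.0, 0.1, trace_len)   # background switching noise
    trace[feature_at + jitter] += 1.0         # the leaking check, displaced
    return int(np.argmax(trace))              # where the feature shows up

# Without jitter the peak is perfectly aligned across captures; with jitter
# it wanders, breaking the sample-wise alignment DL attribution relies on.
aligned = {capture(0) for _ in range(20)}
jittered = {capture(100) for _ in range(20)}
```

For the defence to matter, the jitter range must exceed what an attacker can undo with resynchronization (e.g. peak alignment or elastic matching) before training.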

Conclusion & Key Takeaways

The integration of Deep Learning attribution into hardware security testing is a game-changer. It reduces the 'vulnerability discovery' phase from months to days. The successful bypass of the DS28C36's EEPROM protection demonstrates that even well-regarded secure authenticators can be compromised when an attacker uses data science to refine their physical attacks. For researchers, the lesson is clear: learn to use tools like Scandal. For developers: assume your timing is visible, and build your hardware defenses accordingly. Always remember to conduct your research ethically and on hardware you own.

AI Summary

This research presentation by Karim Abdellatif from Ledger Donjon demonstrates a novel approach to hardware security evaluation: using deep learning (DL) attribution methods to guide Fault Injection (FI) attacks. The primary challenge addressed is the 'black-box' nature of hardware hacking, where attackers often lack internal documentation and must spend months of trial-and-error scanning a chip's surface and timing to find vulnerabilities. Abdellatif proposes using DL to automate the discovery of these 'vulnerable moments.'

The methodology focuses on the Analog Devices DeepCover DS28C36, a secure authenticator used in hardware wallets to store private keys. The team first collected power consumption traces while executing the memory read command in two states: when the user memory slots were protected and when they were unprotected. A deep learning model (specifically MLP and CNN architectures) was then trained to classify these traces. To extract actionable intelligence from the model, they applied attribution methods like Layer-wise Relevance Propagation (LRP) and Gradient-based analysis. These methods identify which specific points in the power trace (the 'pixels' of the signal) were most influential in the model's decision-making process.

The attribution analysis revealed two distinct timing zones where the chip checked security fuses, suggesting a double-checking countermeasure designed to thwart single fault injection. Armed with this timing data, the researchers performed a double laser fault injection. This precisely timed attack bypassed both security checks, allowing the team to successfully dump the contents of the protected EEPROM user slots. The presentation concludes by highlighting the tool 'Scandal,' an open-source framework developed for these analyses, and urges hardware vendors to implement more robust protections such as power blinding and advanced desynchronization to counter AI-driven hardware exploitation.
