The Living Dead: Hacking Mobile Face Recognition SDKs with Non-Deepfake Attacks
This talk demonstrates how to bypass mobile face recognition SDKs by exploiting insecure system architectures and implementation flaws in the liveness detection and result-passing protocols. The researchers show that by hooking into the SDK's logic, an attacker can manipulate liveness check results or replace captured images with a victim's photo to achieve unauthorized authentication. The presentation highlights that many popular mobile applications, particularly in the financial sector, are vulnerable due to improper handling of sensitive verification data and lack of secure communication between the client and the backend. The researchers provide a methodology for reverse-engineering these SDKs to identify and exploit these weaknesses without requiring complex deepfake generation.
Bypassing Mobile Face Recognition: Why Your Liveness Checks Are Just Security Theater
TLDR: Mobile face recognition SDKs often rely on insecure client-side logic that can be easily bypassed by hooking into the application process. By manipulating the liveness detection flow or forging the final verification result, an attacker can authenticate as any user without needing a deepfake. Developers must move away from client-side trust and implement server-side hardware attestation to prevent these trivial identity spoofing attacks.
Face recognition has become the gold standard for "Know Your Customer" (KYC) processes in mobile banking and ride-sharing apps. It feels secure because it feels futuristic. But under the hood, many of these implementations are built on sand. The recent research presented at Black Hat 2023 on hacking mobile face recognition SDKs proves that you do not need a high-end GPU or a sophisticated deepfake model to bypass these systems. You just need a rooted device, a copy of Frida, and a basic understanding of how these SDKs pass data back to the server.
The Architecture of Failure
Most mobile face recognition SDKs follow a predictable, three-step workflow: detect the face, perform a liveness check, and then send the result to the backend for final matching. The vulnerability lies in the fact that many developers treat the client-side SDK as a trusted authority.
In a typical "Local-Cloud Mixed" architecture, the SDK performs the liveness detection on the device and then sends a result—often a simple boolean or a confidence score—to the application code. If the application code is responsible for forwarding this result to the backend, the entire security model collapses. An attacker with root access can hook the SDK’s methods to force a "success" return value, regardless of what the camera actually sees.
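The trust failure described above can be sketched in a few lines. This is a hypothetical, deliberately simplified backend handler, not code from any real SDK; the request shape and field names are illustrative assumptions.

```javascript
// Vulnerable pattern: the backend trusts a client-supplied liveness flag.
// All names here are hypothetical, for illustration only.
function handleVerification(request) {
  // The app forwards the SDK's local result verbatim...
  if (request.livenessPassed === true) {
    return { status: "authenticated", user: request.userId };
  }
  return { status: "rejected" };
}

// An attacker who hooks the app (or crafts the request directly)
// never has to pass a real liveness check:
const forged = { userId: "victim-123", livenessPassed: true };
console.log(handleVerification(forged).status); // "authenticated"
```

Nothing in this flow proves a liveness check ever ran on a real camera feed, which is exactly why hooking the client is sufficient.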
Hooking the Logic
The research highlights that you do not need to reverse-engineer the complex machine learning models themselves. Instead, you target the integration points. Using Frida, you can enumerate the classes and methods within the SDK to find the liveness check function.
Once you identify the method responsible for returning the liveness status, you can intercept it. For example, if the SDK uses a method like isLivenessPassed(), you can write a simple script to force it to return true every time it is called.
```javascript
Java.perform(function () {
    var targetClass = Java.use("com.example.sdk.LivenessManager");
    targetClass.isLivenessPassed.implementation = function () {
        console.log("Forcing liveness check to pass...");
        return true;
    };
});
```
This bypasses the need for the user to blink, nod, or turn their head. The SDK effectively reports that the liveness check was successful, and the app proceeds to the next stage of the authentication flow.
The Result-Passing Pitfall
Even if the liveness check itself is performed securely, the way the result is passed to the backend is often flawed. Some SDKs return a plaintext result, which is trivial to intercept and modify. Others encrypt or sign the result but fail to bind it to the specific session or user, leaving the scheme open to replay and malleability attacks.
In the replay case, an attacker captures a valid "success" response from a legitimate session and submits it in their own malicious session. If the backend does not verify that the cryptographic proof is tied to the current transaction, it will accept the replayed result as valid. This is a classic instance of OWASP A02:2021 – Cryptographic Failures: without integrity and freshness checks, an attacker can manipulate the authentication state.
Testing for These Flaws
When you are on an engagement, stop treating the face recognition flow as a black box. Start by checking if the application is using a known SDK. You can often find clues in the AndroidManifest.xml or by inspecting the assets folder for model files like .dat or .tflite.
If the application is protected by a commercial packer, use DexHunter or a similar unpacking tool to recover the dex code. Then look for the methods that handle the liveness result: if you see logic that performs a check and returns a boolean to the main application code, you have found your target.
The impact of this vulnerability is critical. It allows for account takeover, fraudulent account creation, and identity theft on a massive scale. If an attacker can automate this process, they can create thousands of fake accounts in minutes, which is exactly what we have seen in recent tax fraud cases.
Defending the Perimeter
The only way to fix this is to stop trusting the client. The liveness detection result should never be a simple boolean passed through the application code. Instead, the SDK should generate a cryptographically signed proof of the liveness check that is sent directly from the SDK to the backend.
Furthermore, the backend must perform its own verification, ideally using hardware-backed attestation to ensure that the request is coming from a genuine, untampered device. If your client is relying on a third-party SDK, push them to audit the result-passing protocol. If the SDK returns a result that the app can modify before sending it to the server, the implementation is fundamentally broken.
Security in the age of AI is not about the complexity of the model. It is about the integrity of the system that wraps it. If you can hook the process, you own the result. Stop looking for deepfakes and start looking at the function calls.