
GenAI Red Teaming for Payment Fraud

DEFCON Conference · 599 views · 46:17 · 6 months ago

This talk demonstrates how generative AI models can be used to automate the creation of fraudulent documents, such as utility bills and identity cards, to bypass automated customer verification systems. The speakers show how these AI-generated artifacts can deceive existing fraud detection controls, including document processing and liveness checks. The presentation highlights the vulnerability of current authentication workflows to AI-driven social engineering and emphasizes the need for more robust, multi-layered fraud prevention strategies. A live demonstration showcases the use of publicly available AI tools to perform these attacks.

Bypassing Automated Identity Verification with Generative AI

TLDR: Modern automated identity verification systems are increasingly vulnerable to generative AI, which can now produce convincing, spoofed documents and real-time video deepfakes with minimal effort. Researchers demonstrated that off-the-shelf tools can bypass standard liveness checks and document processing, turning the human element of these systems into the primary attack vector. Security teams must move beyond static document analysis and implement more robust, multi-layered fraud prevention strategies to counter these evolving threats.

Automated identity verification has become the standard for onboarding in the financial sector, but the assumption that these systems are inherently secure is rapidly collapsing. As banks and payment processors shift away from in-person verification, they have become reliant on digital document processing and liveness checks. While these systems are designed to detect fraud, they are currently being outpaced by the accessibility and capability of generative AI. The barrier to entry for creating high-quality, fraudulent artifacts has dropped to near zero, and the impact is already being felt in the form of increased fraud volumes and sophisticated social engineering campaigns.

The Mechanics of AI-Driven Identity Spoofing

The core of this vulnerability lies in the fact that many automated verification systems rely on static data points that are now easily spoofed. During the research, it was demonstrated that tools like ChatGPT and Qwen Image can generate realistic-looking utility bills and identity documents from simple text prompts. These are not just crude fakes; they are high-resolution, contextually accurate documents that can pass initial automated scans.

The attack flow is straightforward. An attacker generates a fake document, such as a utility bill, then uses a tool like Deep-Live-Cam to perform real-time face-swapping, mapping their own facial features onto a target identity or a synthetic persona. Combined with ElevenLabs for voice synthesis, the attacker can convincingly impersonate a legitimate customer during a video verification call or liveness check.
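The flow described above can be sketched as a simple three-stage pipeline. To be clear, `generate_document`, `start_face_swap`, and `clone_voice` below are hypothetical placeholders standing in for the categories of tools named in the text (image generators, Deep-Live-Cam, ElevenLabs); none of these are real APIs.

```python
# Hypothetical placeholders -- illustrative only, not real tool APIs.

def generate_document(doc_type: str, name: str, address: str) -> dict:
    """Stand-in for an image-generation prompt producing a fake document."""
    return {"type": doc_type, "name": name, "address": address,
            "has_expected_layout": True}  # passes structural scans

def start_face_swap(target_face_ref: str) -> str:
    """Stand-in for a real-time face-swap feeding a virtual camera."""
    return f"virtual_camera:{target_face_ref}"

def clone_voice(target_voice_ref: str) -> str:
    """Stand-in for a real-time voice-synthesis session."""
    return f"voice_stream:{target_voice_ref}"

def impersonation_pipeline(persona: dict) -> dict:
    """Compose the three stages into one verification-call impersonation."""
    return {
        "document": generate_document("utility_bill",
                                      persona["name"], persona["address"]),
        "video": start_face_swap(persona["face_ref"]),
        "audio": clone_voice(persona["voice_ref"]),
    }
```

The point of the sketch is the composition: each stage defeats one control (document scan, liveness check, voice verification), and chaining them covers the entire automated onboarding flow.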

The technical reality is that these systems are often trained on datasets that do not account for the high-fidelity, dynamic nature of AI-generated content. Presented with an AI-generated image, a system may classify it as genuine simply because it contains the expected structural elements of a valid document. The following pseudo-code illustrates how easily a rule-based check can be bypassed when it depends on static, spoofable input data:

def verify_document(document_image):
    # Automated document scan (OCR, template matching). Returns True when
    # the expected structural elements are present -- a bar that
    # AI-generated documents now clear easily.
    return document_image.get("has_expected_layout", False)

def validate_transaction(transaction_data):
    # Standard rule: flag high-value transactions for manual review
    if transaction_data["amount"] > 5000:
        return "flag_for_review"

    # Vulnerability: approval hinges on static, spoofable document data
    if verify_document(transaction_data["document_image"]):
        return "approve"

    return "deny"

Real-World Applicability for Pentesters

For those conducting red team engagements or bug bounty research, this is a critical area to investigate. When testing a client's onboarding flow, do not just look for traditional web vulnerabilities like XSS or IDOR. Instead, focus on the identity verification pipeline. Can you submit a synthetic document? Can you manipulate the liveness check by using a virtual camera feed?

The impact of a successful bypass is significant. It allows for account takeover, synthetic identity fraud, and the creation of mule accounts that can be used to launder money. In a recent engagement, it was observed that even basic, off-the-shelf detection models—such as those based on Capsule Forensics—struggled to differentiate between genuine and AI-generated images once they were retrained on a small set of synthetic data. This highlights the "cat and mouse" nature of AI security; as detection models improve, so do the generative models used to bypass them.
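The "cat and mouse" dynamic can be made concrete with a deliberately simple heuristic. This is an illustrative toy, not Capsule Forensics or any production detector: early generated images were often unnaturally smooth, so a naive detector might threshold on local pixel variance, and a generator that adds realistic noise immediately defeats it.

```python
def local_variance(image: list[list[float]]) -> float:
    """Mean squared difference between horizontally adjacent pixels --
    a crude proxy for high-frequency detail (sensor noise, texture)."""
    diffs = [
        (row[i + 1] - row[i]) ** 2
        for row in image
        for i in range(len(row) - 1)
    ]
    return sum(diffs) / len(diffs)

def looks_generated(image: list[list[float]], threshold: float = 0.001) -> bool:
    """Toy detector: flags images that are 'too smooth' as synthetic."""
    return local_variance(image) < threshold

# A perfectly smooth gradient (stand-in for an over-smooth fake)...
smooth = [[x / 100 for x in range(100)] for _ in range(10)]

# ...versus the same image with small alternating noise added by an
# improved generator, which pushes it back over the threshold.
noisy = [[v + (0.05 if i % 2 else -0.05) for i, v in enumerate(row)]
         for row in smooth]
```

Here `looks_generated(smooth)` is true while `looks_generated(noisy)` is false: a trivial change on the generation side silently invalidates the detection rule, which is the retraining arms race in miniature.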

The Defensive Challenge

Defending against these attacks requires a fundamental shift in how identity is verified. Relying on a single, static document or a simple video liveness check is no longer sufficient. Organizations must implement multi-layered authentication that incorporates behavioral biometrics, device fingerprinting, and cross-referencing against multiple, independent data sources.
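As a sketch of what "multi-layered" could mean in code (the signal names and weights below are illustrative assumptions, not a standard): no single check approves an identity. Instead, independent signals are combined into a risk score, so spoofing the document alone no longer yields approval.

```python
def risk_score(signals: dict[str, bool]) -> float:
    """Combine independent verification signals into a 0..1 risk score.
    Weights are illustrative; a real system would calibrate them."""
    weights = {
        "document_valid": 0.25,       # automated document scan
        "liveness_passed": 0.25,      # video liveness check
        "device_known": 0.20,         # device fingerprinting
        "behavior_consistent": 0.15,  # behavioral biometrics
        "records_match": 0.15,        # independent data-source cross-check
    }
    # Each missing or failed signal contributes its weight to the risk.
    return sum(w for name, w in weights.items() if not signals.get(name, False))

def decide(signals: dict[str, bool], max_risk: float = 0.2) -> str:
    score = risk_score(signals)
    if score == 0.0:
        return "approve"
    return "manual_review" if score <= max_risk else "deny"
```

Under this scheme, a deepfake that passes both the document scan and the liveness check still fails on device, behavior, and records signals, accumulating 0.50 risk and landing in "deny" rather than "approve".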

Furthermore, the human element remains the softest target. Attackers are increasingly targeting the contact center staff who are responsible for manual reviews of flagged transactions. By using deepfakes to bypass automated systems, attackers can create a "false sense of security" where the human reviewer is more likely to approve a transaction because the automated system has already performed a preliminary check.

Moving Forward

The current state of automated identity verification is a race between generative capabilities and detection logic. As OWASP has long highlighted, identification and authentication failures are a top-tier risk, and generative AI has only amplified this. If your organization or client is relying on automated verification, it is time to conduct a thorough audit of the entire onboarding pipeline. Test the system with the same tools that an attacker would use. If you can bypass the liveness check with a simple face-swap, you can be certain that a motivated adversary already has.

Talk Type: exploit demo
Difficulty: intermediate
Has Demo · Has Code · Tool Released

DC33 Payment Village Talks (5 talks · 2025)