
Social Engineering A.I. and Subverting H.I.

DEFCONConference · 46:49

This talk demonstrates the use of Large Language Models (LLMs) to facilitate social engineering attacks, specifically vishing and phishing, by generating convincing lures and scripts. It highlights how attackers can leverage AI to bypass traditional security awareness training and manipulate employees into performing unauthorized actions. The presentation emphasizes the need for situational awareness training over static security awareness programs to defend against AI-augmented social engineering. The speaker also showcases the use of AI to generate malicious payloads for hardware-based attack tools like the Hak5 Rubber Ducky.

Weaponizing LLMs for Social Engineering and Payload Generation

TL;DR: This research demonstrates how Large Language Models (LLMs) can be weaponized to automate sophisticated social engineering campaigns and generate malicious payloads for hardware tools like the Hak5 Rubber Ducky. By bypassing traditional security awareness training through hyper-personalized lures, attackers can manipulate employees into executing unauthorized actions. Pentesters should shift their focus from static awareness training to situational awareness exercises that mirror real-world AI-augmented threats.

Social engineering has always been the path of least resistance, but the barrier to entry just dropped significantly. We have spent years training employees to spot the classic "Nigerian Prince" email or the poorly translated phishing link. Those days are effectively over. The integration of LLMs into the attacker toolkit allows for the rapid generation of highly convincing, context-aware lures that slip past even the most skeptical users.

The Shift from Static Phishing to AI-Augmented Vishing

The core of this research centers on the ability to use LLMs to automate the reconnaissance and engagement phases of an attack. Instead of manually crafting a pretext, an attacker can feed an LLM specific details about a target organization, its internal jargon, and its communication style. The result is a series of scripts for vishing or phishing that sound indistinguishable from legitimate internal communications.

During the demonstration, the researcher highlighted how these models can be coerced into providing actionable intelligence. While many platforms have guardrails to prevent the generation of malicious content, these can often be circumvented through clever prompt engineering. By framing the request within an educational or research context, an attacker can extract specific social engineering scripts or even technical payloads.

Automating Payload Generation for Hardware Attacks

One of the most practical takeaways for a pentester is the use of LLMs to generate payloads for hardware-based attack tools. The Hak5 Rubber Ducky is a staple in any red team engagement, but writing Ducky Script can be tedious. The research shows that you can simply ask an LLM to generate a script for a specific task, such as exfiltrating files from a user's documents folder into a compressed archive.

For example, asking the model for a payload that compresses a user's Documents folder into an archive under C:\temp can yield a functional, albeit basic, script like this:

REM Open the Windows Run dialog (Windows key + R)
DELAY 500
GUI r
DELAY 500
REM Launch a hidden PowerShell window that zips the user's Documents folder
REM (assumes C:\temp already exists on the target)
STRING powershell -WindowStyle Hidden -Command "Compress-Archive -Path $env:USERPROFILE\Documents\* -DestinationPath C:\temp\backup.zip"
ENTER

While the model might include a disclaimer about responsible use, the code it provides is functional. This turns a time-consuming manual task into a matter of seconds. For a pentester, this means you can spend less time writing boilerplate code and more time focusing on the logic of your engagement.

Real-World Applicability and the Insider Threat

Where does this hit the hardest? It is in the realm of the "non-malicious insider." Most employees want to do their jobs well. They are helpful, they are busy, and they are conditioned to follow instructions from perceived authority figures. When an attacker uses an AI-generated voice or a perfectly crafted email to impersonate an executive or an IT administrator, the employee is often more concerned with being helpful than with verifying the request.

This is where OWASP A01:2021-Broken Access Control becomes relevant. If an attacker can manipulate an employee into running a script or providing credentials, they have effectively bypassed the most expensive security controls in the stack. The impact is not just a single compromised machine, but potentially full access to cloud environments like Microsoft 365, where Copilot or other AI integrations can be further abused to exfiltrate sensitive data.

Moving Beyond Security Awareness Training

Defenders need to stop relying on annual, slide-based security awareness training. It is ineffective against an attacker who can generate a unique, personalized phishing lure for every single employee in your company. Instead, organizations must implement situational awareness training. This means running exercises that force employees to think critically about the context of a request.

If an employee receives a request to perform an unusual action, they should have a clear, non-punitive channel to verify that request. The goal is to create "teachable moments" where employees see the attack in action. When they understand what a real-world attack looks like—not just a generic phishing template—they are far more likely to flag it.

Ultimately, the rise of AI in social engineering is not a reason to panic, but it is a reason to evolve. We are in an arms race where the tools are becoming more accessible to everyone. As researchers and pentesters, our job is to stay ahead of the curve by understanding how these models can be subverted and by building defenses that account for the human element. Stop treating your employees like a liability and start treating them like the final, most critical layer of your security architecture. If you are not testing your team against these AI-augmented scenarios, you are already behind.
