Thinking Like a Hacker in the Age of AI
This talk explores the intersection of human cognitive processes and artificial intelligence, focusing on how hackers can leverage AI to enhance their problem-solving and reconnaissance capabilities. It discusses the necessity of adopting a 'hacker mindset' to critically evaluate AI outputs, identify potential vulnerabilities in AI-driven systems, and maintain human oversight in automated workflows. The speaker emphasizes the importance of cross-disciplinary learning and cognitive defense against AI-generated misinformation and manipulation.
Why Your AI-Driven Reconnaissance Is Already Outdated
TLDR: Modern AI models are not just tools for automation; they are active participants in the reconnaissance phase that require a fundamental shift in how researchers approach target discovery. By treating AI as a collaborative partner rather than a static search engine, attackers can uncover non-obvious attack vectors that traditional scanners miss. This post breaks down how to move beyond basic prompt engineering to build a cognitive defense against AI-generated misinformation and manipulation.
The industry has spent the last year obsessed with how to break Large Language Models (LLMs) via prompt injection or data poisoning. While those are valid research areas, they miss the bigger picture. The real threat is not just the model itself, but how we are integrating these systems into our own workflows. If you are still using AI as a glorified autocomplete for your bash scripts, you are missing the point. The most effective researchers are now using AI to perform high-level reconnaissance, identifying patterns in target infrastructure that are invisible to standard tools like nmap or ffuf.
The Shift to Cognitive Reconnaissance
Traditional reconnaissance relies on deterministic tools. You feed in a target; it returns a list of open ports, subdomains, or endpoints. That is the "noise" I talk about constantly. The signal is in the relationships between those assets. When you feed a massive dump of unstructured data—server logs, public code repositories, and leaked configuration files—into an LLM, you aren't just indexing it. You are asking the model to perform a synthesis that would take a human weeks to complete.
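As a toy illustration of that synthesis, here is the kind of pre-processing you might do before handing data to a model: instead of a flat list of hostnames, preserve which assets appear together. The sample data, hostnames, and regex below are all invented for the example:

```python
import re
from collections import defaultdict

# Invented fragments from a log file and a leaked config.
UNSTRUCTURED = """
Jan 10 auth.internal.example.com sshd: accepted key from jump.example.com
db_host=postgres.internal.example.com
upstream api { server api.internal.example.com; }
Jan 11 api.internal.example.com -> postgres.internal.example.com connect ok
"""

# Rough hostname pattern: dotted labels, each starting with a letter.
HOST_RE = re.compile(r"\b([a-z][a-z0-9-]*(?:\.[a-z][a-z0-9-]*)+)\b")

def extract_asset_graph(blob: str) -> dict:
    """Group hostnames by line co-occurrence, so the relationships
    between assets (not just the asset list) are preserved."""
    graph = defaultdict(set)
    for line in blob.strip().splitlines():
        hosts = HOST_RE.findall(line)
        for host in hosts:
            graph[host].update(h for h in hosts if h != host)
    return {h: sorted(n) for h, n in graph.items()}

assets = extract_asset_graph(UNSTRUCTURED)
```

This is deliberately crude; the point is that the structured graph, not the raw dump, is what you hand the model to reason over.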
This is where the "hacker mindset" becomes a technical requirement. You have to treat the AI as an unreliable, albeit brilliant, intern. If you ask an LLM to "find vulnerabilities in this code," it will hallucinate. If you ask it to "map the logical flow of this authentication process and identify where the state transition fails," you get a roadmap for a manual exploit. The difference is in the precision of the context you provide.
Moving Beyond Basic Prompting
The most effective way to use these models for reconnaissance is to adopt a structured, iterative approach. Instead of a single, massive prompt, break your target analysis into discrete, logical steps. For example, when analyzing a target's API documentation, don't just ask for endpoints. Use a chain-of-thought approach to force the model to reason through the authorization logic.
# Example of a structured reconnaissance workflow
# 1. Extract endpoints from the target's Swagger/OpenAPI spec
# 2. Feed the extracted endpoints to the LLM with specific context:
#    "Analyze these endpoints for potential IDOR vectors.
#     Focus on parameters that reference user IDs or resource ownership."
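Step 1 of that workflow can be sketched in a few lines. The spec below is a made-up example, and the heuristic simply flags path parameters whose names suggest object references worth testing:

```python
# Minimal, invented OpenAPI-style spec for illustration.
SPEC = {
    "paths": {
        "/users/{user_id}/profile": {"get": {}},
        "/orders/{order_id}": {"get": {}, "delete": {}},
        "/health": {"get": {}},
    }
}

# Parameter names that often indicate user IDs or resource ownership.
IDOR_HINTS = ("user_id", "account_id", "order_id", "owner")

def idor_candidates(spec: dict) -> list:
    """Return (method, path) pairs whose path parameters match
    a known object-reference naming pattern."""
    hits = []
    for path, methods in spec["paths"].items():
        if any(hint in path for hint in IDOR_HINTS):
            for method in methods:
                hits.append((method.upper(), path))
    return hits

candidates = idor_candidates(SPEC)
```

The output of this filter, not the full spec, is what goes into the prompt in step 2: a smaller, pre-scoped list the model can reason about without drowning in irrelevant endpoints.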
This technique, often referred to as Prompt Engineering for Security, is not about magic words. It is about providing the model with the same constraints you would give a junior pentester. You are defining the scope, the objective, and the expected output format. When you do this, the AI stops being a search engine and starts being a force multiplier.
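In practice, "the same constraints you would give a junior pentester" can be as simple as a prompt template with explicit scope, objective, and output-format fields. The field names and layout here are just one possible convention:

```python
def build_recon_prompt(scope: str, objective: str, data: str) -> str:
    """Assemble a constrained analysis prompt. Scope and objective are
    stated up front so the model reasons within them, and the output
    format is pinned down to keep responses machine-parseable."""
    return "\n".join([
        f"SCOPE: {scope}",
        f"OBJECTIVE: {objective}",
        "OUTPUT FORMAT: one finding per line as "
        "'<endpoint> | <hypothesis> | <confidence>'",
        "Do not speculate beyond the data provided below.",
        "--- DATA ---",
        data,
    ])

prompt = build_recon_prompt(
    scope="Endpoints under /api/v2 only; authorization logic only",
    objective="Map the auth state transitions and flag where they can be skipped",
    data="GET /api/v2/orders/{order_id} requires session cookie only",
)
```

Pinning the output format matters as much as the scope: a line-per-finding response can be parsed and fed into the next iteration of the analysis, which is what makes the approach iterative rather than one-shot.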
The Danger of AI-Generated Misinformation
We are seeing a rise in AI-driven social engineering that goes far beyond simple phishing emails. Attackers are now using AI to generate highly specific, context-aware lures that mimic internal communications with terrifying accuracy. As researchers, we need to be aware that our own reconnaissance data might be tainted. If you are pulling data from public sources that have been scraped and re-indexed by AI, you might be looking at a "hallucinated" infrastructure.
Always verify your findings. If the AI suggests a specific misconfiguration in a cloud environment, treat it as a hypothesis, not a fact. Use official cloud provider security advisories to cross-reference the AI's claims. Never run a payload suggested by an LLM without fully understanding the underlying mechanics. If you don't know how the exploit works, you shouldn't be running it.
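One way to enforce that discipline mechanically is to wrap every AI-suggested finding in a record that starts life as a hypothesis and cannot be acted on until a human attaches independent evidence. The class and field names below are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """An AI-suggested finding. Starts unverified; tooling downstream
    refuses to act on it until evidence is attached by a human."""
    claim: str
    source: str = "llm"
    evidence: list = field(default_factory=list)

    @property
    def confirmed(self) -> bool:
        return bool(self.evidence)

def act_on(finding: Finding) -> str:
    if not finding.confirmed:
        raise RuntimeError(
            f"unverified hypothesis, refusing to act: {finding.claim}"
        )
    return f"proceeding with manual validation of: {finding.claim}"

f = Finding(claim="bucket 'acme-backups' allows anonymous listing")
# act_on(f) would raise here. Cross-reference first, then record proof:
f.evidence.append("manual aws s3 ls --no-sign-request reproduced the listing")
```

The gate is trivial, but it encodes the rule from above: an LLM claim is a hypothesis, and the evidence trail of how it was confirmed travels with the finding into your report.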
Why You Need to Think Like a Machine
The most critical skill for a researcher in the age of AI is the ability to think at the same level of abstraction as the system you are testing. If you are testing an AI-driven application, you need to understand how it processes tokens, how it manages context windows, and where it stores its training data. This is not just about CVE-2023-29357 or similar vulnerabilities; it is about the fundamental architecture of the system.
When you are on an engagement, look for the "hidden" logic. Where is the AI making decisions? Is it using a hard-coded system prompt? Is it pulling data from an external vector database? These are the new "ports" and "services" of the AI era. If you can identify these components, you can start to map out the attack surface.
What to Do Next
Stop treating AI as a black box. Start treating it as a component of the target environment. If you are not already, start building your own local, private instances of models to experiment with. Use Ollama to run models locally so you can test your prompts without leaking sensitive target data to third-party APIs.
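As a minimal sketch, a local Ollama instance exposes an HTTP API on port 11434, so a recon prompt never has to leave your machine. The model name is whatever you have pulled locally ("llama3" below is just a placeholder):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    # stream=False asks Ollama for a single JSON object
    # instead of a stream of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def query_local_model(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server. Target data
    stays on localhost instead of going to a third-party API."""
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Usage would be `query_local_model("llama3", prompt)` with a prompt built from your own recon data; because the endpoint is local, you can iterate on sensitive target material freely.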
The goal is not to let the AI do the work for you. The goal is to use the AI to clear away the noise so you can focus on the high-value, manual exploitation that actually matters. The researchers who will succeed in the next five years are the ones who can bridge the gap between human intuition and machine-scale analysis. Start building that bridge today.