Foreign Information Manipulation and Interference: Disinformation 2.0
This talk analyzes the evolution of state-sponsored disinformation campaigns, focusing on the sophisticated use of cloned websites, bot networks, and AI-generated content to manipulate public perception. It details the 'ABC(DE)' model—Actors, Behavior, Content, Distribution, and Effect—used to map and identify coordinated inauthentic behavior across global platforms. The presentation highlights how these campaigns infiltrate AI training data and leverage automated tools to overwhelm fact-checkers, ultimately aiming to destabilize democratic processes through hybrid warfare tactics.
Beyond Phishing: How State Actors Poison AI Models with Synthetic Disinformation
TLDR: Modern state-sponsored disinformation campaigns have evolved from simple social engineering into complex, automated operations that poison the training data of LLMs and overwhelm human fact-checkers. By deploying massive networks of cloned news sites and AI-generated content, these actors manipulate the information ecosystem to influence geopolitical outcomes. Pentesters and researchers must recognize that the threat is no longer just about account compromise, but about the integrity of the data that powers our automated decision-making systems.
Traditional red teaming often focuses on the technical stack—the web application, the cloud infrastructure, or the endpoint. We look for SQL injection, broken access control, or misconfigured S3 buckets. However, the research presented at Black Hat 2025 on Foreign Information Manipulation and Interference (FIMI) forces us to expand our threat model. We are no longer just defending against unauthorized access to a database; we are defending against the systematic corruption of the information that users and AI models rely on to perceive reality.
The Mechanics of Information Poisoning
The FIMI campaigns described in this research utilize a sophisticated, multi-layered approach that mirrors the kill chain we see in cyberattacks. The researchers identified a framework they call the ABC(DE) model: Actors, Behavior, Content, Distribution, and Effect.
The "Behavior" component is where the most significant shift has occurred. Instead of relying on manual content creation, state actors are now using AI to generate thousands of variations of a single narrative. This content is then pushed through "Pink Slime" websites—domains that masquerade as legitimate local news outlets. These sites are designed to look professional, often mimicking the branding of established media organizations. By flooding the ecosystem with this synthetic content, these actors ensure that when an AI chatbot or a search engine crawls the web for information, it ingests their fabricated narratives as ground truth.
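As a rough illustration of how defenders might surface these machine-generated variants, the sketch below greedily clusters posts by token overlap. The post text is invented for illustration, and production coordinated-inauthenticity detection uses far more robust embedding- or MinHash-based similarity, but the principle is the same: many near-identical rephrasings of one story are a behavioral signal.

```python
import re
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Token-level overlap ratio between two texts (0.0 to 1.0)."""
    ta = re.findall(r"\w+", a.lower())
    tb = re.findall(r"\w+", b.lower())
    return SequenceMatcher(None, ta, tb).ratio()

def cluster_variants(posts: list[str], threshold: float = 0.6) -> list[list[str]]:
    """Greedily group posts whose token overlap exceeds the threshold,
    flagging likely machine-generated variants of a single narrative."""
    clusters: list[list[str]] = []
    for post in posts:
        for cluster in clusters:
            if similarity(post, cluster[0]) >= threshold:
                cluster.append(post)
                break
        else:
            clusters.append([post])
    return clusters

# Hypothetical posts: two rephrasings of one narrative plus an unrelated story.
posts = [
    "Warehouse fire in Riga linked to sabotage, officials say",
    "Officials say warehouse fire in Riga is linked to sabotage",
    "Local bakery wins regional award for sourdough",
]
groups = cluster_variants(posts)
print(len(groups))  # → 2 (the two fire stories merge into one cluster)
```

A single rephrased story proves nothing; the signal emerges when hundreds of variants cluster together across accounts that share registration patterns or posting schedules.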
This is essentially a data poisoning attack on a global scale. When an LLM retrieves information from these compromised sources, it incorporates that bias into its responses. For a pentester, this means the attack surface now includes the training data and the retrieval-augmented generation (RAG) pipelines that organizations are rapidly deploying.
Mapping the Network
Detecting these campaigns requires a shift toward network analysis and behavioral monitoring. The research highlighted how bot networks are used to amplify specific narratives, moving them from fringe, state-aligned accounts into the mainstream.
A critical technical point here is the use of automated infrastructure. Actors are not just creating one or two accounts; they are applying techniques catalogued in MITRE ATT&CK as T1583 (Acquire Infrastructure) and T1585 (Establish Accounts) to build vast, interconnected webs of fake personas. By analyzing the graph of these interactions, researchers can identify the "key nodes": the accounts that serve as the primary amplifiers for the disinformation.
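A minimal sketch of that graph analysis, assuming amplification edges (retweets, boosts, quote-posts) have already been collected. The account names and edges are invented; real investigations would apply proper centrality measures over far larger graphs, but even raw degree counts surface the hub:

```python
from collections import Counter

def key_nodes(amplifications: list[tuple[str, str]], top_n: int = 3) -> list[str]:
    """Rank accounts by total degree (edges in which they appear, as either
    amplifier or amplified) -- a crude proxy for degree centrality."""
    degree: Counter[str] = Counter()
    for src, dst in amplifications:
        degree[src] += 1
        degree[dst] += 1
    return [node for node, _ in degree.most_common(top_n)]

# Hypothetical amplification edges: (amplifier, amplified)
edges = [
    ("bot_01", "hub_account"), ("bot_02", "hub_account"),
    ("bot_03", "hub_account"), ("hub_account", "state_outlet"),
    ("bot_01", "bot_02"),
]
print(key_nodes(edges, top_n=1))  # → ['hub_account']
```

In practice you would weight edges by recency and reach, and look for accounts whose removal fragments the amplification graph, not merely the highest-degree nodes.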
If you are performing a red team engagement, consider how you might simulate this. You don't need to hack a server to cause damage; you can manipulate the perception of a target by controlling the information they consume. This is the essence of hybrid warfare. The goal is to force the target to make decisions based on a false reality.
The Overload Strategy
One of the most effective tactics identified is "Operation Overload." Fact-checkers are a finite resource. By bombarding these organizations with thousands of fake reports, fabricated screenshots, and synthetic evidence, state actors effectively perform a Denial of Service (DoS) attack on the truth.
When a fact-checker is busy debunking a fake story about a fire at a warehouse, they aren't looking at the more subtle, long-term narrative shifts being pushed elsewhere. This is a classic distraction technique. For those of us in the security industry, this is a reminder that our defensive tools—whether they are SIEMs or human analysts—can be overwhelmed by volume. We need to prioritize automated detection of coordinated inauthentic behavior, using tools like Blackbird.AI or NewsGuard to identify patterns that human eyes will inevitably miss.
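A toy volumetric tripwire along these lines might bucket incoming reports by narrative and time window and flag anomalous bursts. The window size, threshold, timestamps, and narrative labels below are illustrative assumptions, not parameters from the research:

```python
from collections import defaultdict

def burst_windows(reports: list[tuple[int, str]],
                  window: int = 3600,
                  threshold: int = 3) -> dict[tuple[str, int], int]:
    """Bucket (timestamp, narrative) reports into fixed time windows and
    return the buckets whose volume exceeds the threshold."""
    buckets: dict[tuple[str, int], int] = defaultdict(int)
    for timestamp, narrative in reports:
        buckets[(narrative, timestamp // window)] += 1
    return {key: count for key, count in buckets.items() if count > threshold}

# Hypothetical report stream: four 'warehouse fire' claims land in one hour.
reports = [
    (10, "warehouse fire"), (50, "warehouse fire"), (90, "warehouse fire"),
    (120, "warehouse fire"), (4000, "warehouse fire"), (30, "election fraud"),
]
flagged = burst_windows(reports)
print(flagged)  # → {('warehouse fire', 0): 4}
```

The point is triage, not truth: a burst flag tells the fact-checking team that a narrative is being pushed at machine speed, so a human should decide whether it is the distraction or the payload.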
Defensive Realities
Defending against FIMI is not a problem that can be solved with a firewall. It requires a fundamental change in how we verify information. Organizations must implement stricter validation for the data sources feeding their AI models. If your RAG pipeline is pulling from the open web, you are inherently trusting that data. You need to implement source-reputation scoring and cross-reference information against trusted, verified APIs.
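A sketch of such source-reputation gating for a RAG pipeline follows. The domain scores and document shape here are invented for illustration; a real deployment would pull ratings from a vetted feed or commercial ratings API rather than a hard-coded table:

```python
# Hypothetical reputation table; in practice this would come from a vetted
# external feed, refreshed regularly, not a static dict.
REPUTATION: dict[str, float] = {
    "apnews.com": 0.95,
    "example-local-news.info": 0.10,
}

def filter_retrieved(docs: list[dict], min_score: float = 0.5) -> list[dict]:
    """Drop retrieved passages whose source domain falls below the reputation
    threshold before they reach the LLM context window. Unrated domains
    default to 0.0, so unknown sources are excluded rather than trusted."""
    return [d for d in docs if REPUTATION.get(d["domain"], 0.0) >= min_score]

docs = [
    {"domain": "apnews.com", "text": "Verified wire report"},
    {"domain": "example-local-news.info", "text": "Pink-slime narrative"},
    {"domain": "unknown-blog.net", "text": "Unrated source"},
]
print([d["domain"] for d in filter_retrieved(docs)])  # → ['apnews.com']
```

The fail-closed default matters: a pink-slime network thrives precisely on freshly registered, never-before-seen domains, so "unknown" must not mean "allowed."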
Furthermore, we need to treat the integrity of our information supply chain with the same rigor we apply to our software supply chain. Just as we scan dependencies for vulnerabilities, we must scan the information sources our systems rely on for signs of manipulation.
The era of simple, isolated disinformation is over. We are now facing a coordinated, AI-driven effort to destabilize the very systems we use to communicate and make decisions. As researchers, our responsibility is to look beyond the code and understand the broader context of the information we interact with daily. The next time you see a "breaking" story on social media, ask yourself: who benefits from this being true, and what infrastructure was used to make it appear that way? The answer might just be the next vulnerability you need to report.