Foreign Information Manipulation and Interference (Disinformation 2.0)
This talk analyzes the methodology of Foreign Information Manipulation and Interference (FIMI) campaigns, focusing on the 'ABC(DE)' model used to track actors, behaviors, content, distribution, and effects. It demonstrates how state-sponsored actors utilize 'doppelganger' websites, 'pink slime' journalism, and bot networks to spread disinformation and manipulate public opinion. The presentation highlights the convergence of disinformation with cyber-attacks, such as GPS jamming and infrastructure sabotage, to destabilize target nations. The speaker emphasizes the need for improved awareness, platform accountability, and international cooperation to counter these hybrid threats.
The Mechanics of Modern Disinformation: How State Actors Weaponize Infrastructure
TLDR: Modern disinformation campaigns have evolved beyond simple social media posts into complex, multi-layered operations that blend digital infrastructure manipulation with physical-world sabotage. By utilizing the ABC(DE) model, state-sponsored actors deploy cloned "doppelganger" websites and bot networks to create a feedback loop that misleads both the public and automated fact-checking systems. Pentesters and researchers must recognize that these campaigns are not just content problems, but sophisticated infrastructure attacks that require a shift in how we monitor and verify digital trust.
Information warfare has moved out of the realm of abstract political theory and directly into the domain of infrastructure security. While most security professionals spend their time hunting for buffer overflows or misconfigured S3 buckets, state-sponsored actors are running campaigns that treat the entire internet as a programmable, exploitable target. The recent research presented at Black Hat 2024 on Foreign Information Manipulation and Interference (FIMI) makes one thing clear: the "disinformation" we see on our feeds is merely the payload of a much larger, automated delivery system.
The ABC(DE) Framework for Infrastructure Attacks
Understanding these campaigns requires looking at the technical framework used to execute them. The ABC(DE) model—Actors, Behavior, Content, Distribution, and Effect—is the playbook for these operations.
The "Actors" are not just random trolls; they are organized entities, often state-backed, that operate with the discipline of a red team. Their "Behavior" involves the systematic creation of fake personas and the co-opting of legitimate voices. The "Content" is where the deception happens, often using "pink slime" websites—low-quality, automated news sites that mimic local journalism to gain credibility.
The most critical technical component is the "Distribution" mechanism. Actors are not just posting to Twitter; they are building entire ecosystems of cloned websites. These "doppelganger" sites are pixel-perfect replicas of legitimate organizations such as The Guardian and NATO, designed to host malicious content while maintaining a veneer of authenticity. By employing MITRE ATT&CK techniques T1583 (Acquire Infrastructure) and T1584 (Compromise Infrastructure), these actors ensure that their fake stories are indexed by search engines and shared by automated bots, effectively poisoning the information supply chain.
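Detecting doppelganger domains at scale is largely a string-similarity problem: a cloned site's hostname must look close enough to the original to fool readers, which also makes it flaggable. Below is a minimal sketch of that idea using Python's standard-library `difflib`; the watchlist, the observed domains, and the 0.8 similarity threshold are all illustrative assumptions, not values from the talk.

```python
# Hypothetical sketch: flag newly observed domains that closely mimic
# a watchlist of legitimate outlets, a hallmark of "doppelganger" sites.
# Watchlist, observed domains, and the 0.8 threshold are illustrative.
from difflib import SequenceMatcher

WATCHLIST = ["theguardian.com", "nato.int", "spiegel.de"]

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means identical strings."""
    return SequenceMatcher(None, a, b).ratio()

def flag_doppelgangers(new_domains, watchlist=WATCHLIST, threshold=0.8):
    """Return (candidate, mimicked, score) triples for domains that are
    suspiciously similar to -- but not equal to -- a protected domain."""
    hits = []
    for candidate in new_domains:
        for legit in watchlist:
            score = similarity(candidate, legit)
            if candidate != legit and score >= threshold:
                hits.append((candidate, legit, round(score, 2)))
    return hits

observed = ["theguardian.co.com", "nato.ws", "example.org"]
for candidate, legit, score in flag_doppelgangers(observed):
    print(f"{candidate} mimics {legit} (similarity {score})")
```

A production pipeline would feed this from certificate-transparency logs or newly-registered-domain feeds and add homoglyph normalization (e.g. Cyrillic lookalike characters), which a plain edit-distance ratio misses.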
The Feedback Loop of Automated Deception
One of the most dangerous aspects of these campaigns is the "Operation Overload" technique. Actors intentionally flood fact-checking organizations with thousands of fake reports, claims, and "evidence" files. This is a classic denial-of-service attack on the human and automated systems responsible for verifying truth.
When a fact-checker is busy debunking a fake story about a non-existent fire at a warehouse, they are not looking at the real, more subtle disinformation campaign happening in the background. This creates a window of opportunity for the actors to push their primary narrative. For those of us in the security industry, this is a familiar pattern: it is the digital equivalent of a distraction attack used to mask a more significant intrusion.
Convergence with Physical Sabotage
Perhaps the most alarming trend is the convergence of FIMI with traditional cyber-attacks. We are no longer just talking about fake news; we are seeing coordinated efforts to destabilize critical infrastructure. The research highlighted instances where disinformation campaigns were used to provide cover for, or amplify the impact of, physical and cyber-attacks on European railways and hospitals.
When an actor jams GPS signals in Estonia, causing planes to divert, they simultaneously push a narrative through their bot networks to explain the event in a way that serves their geopolitical goals. This is a hybrid threat. It is not enough to secure the network; we must also secure the context in which that network operates. If you are performing a red team engagement for a client in a sensitive sector, you should be asking: how would our client respond if their public-facing information was compromised in tandem with their internal systems?
Defensive Realities for the Security Professional
Defending against this requires more than just better spam filters. It requires a fundamental shift in how we view platform accountability. The OWASP Automated Threats to Web Applications project provides a good starting point for understanding how these bot networks operate, but we need to go further.
Defenders must prioritize:
- Infrastructure Monitoring: Treat domain registration patterns and hosting infrastructure as part of your threat intelligence feed. If a new domain is registered that mimics your corporate identity, it should trigger an immediate incident response process.
- Contextual Verification: Implement tools that can verify the provenance of digital content. As AI-generated audio and video become cheaper and more accessible, the ability to cryptographically sign or verify the source of information will become a critical security control.
- Cross-Domain Intelligence: Security teams must break down the silos between their threat intelligence units and their communications or public relations teams. A cyber-attack is rarely just a technical event; it is almost always part of a broader narrative.
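The "Contextual Verification" point above can be made concrete with a small signing sketch. This uses an HMAC with a shared secret purely for illustration; real provenance schemes (e.g. C2PA-style content credentials) use asymmetric signatures and certificate chains, and every name and key below is a hypothetical.

```python
# Minimal sketch of content provenance checking with an HMAC -- a
# shared-secret stand-in for the asymmetric signatures and certificate
# chains that production provenance schemes actually use.
import hashlib
import hmac

SECRET_KEY = b"newsroom-signing-key"  # illustrative; keep real keys in a KMS/HSM

def sign_content(content: bytes, key: bytes = SECRET_KEY) -> str:
    """Publisher side: derive a tag binding the content to the key."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str, key: bytes = SECRET_KEY) -> bool:
    """Consumer side: constant-time comparison resists timing attacks."""
    expected = hmac.new(key, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

article = b"Fire reported at warehouse: officials confirm no injuries."
tag = sign_content(article)
assert verify_content(article, tag)                     # untampered: accepted
assert not verify_content(article + b" (edited)", tag)  # altered: rejected
```

The point is not the cryptography itself but the workflow: a doppelganger site can clone a page's pixels, yet it cannot produce a valid tag for modified content without the publisher's key.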
The era of viewing disinformation as a "marketing problem" is over. It is a technical exploit that targets the most critical system of all: the human perception of reality. As researchers, we need to start treating these campaigns with the same technical rigor we apply to any other exploit chain. The next time you see a "trending" story that seems designed to provoke an emotional response, look at the infrastructure behind it. You might find that the real vulnerability isn't in the software, but in the way we verify the truth.