Foreign Information Manipulation and Interference (Disinformation 2.0)

Black Hat
253,000
1,147 views
52 likes
6 months ago
30:16

Description

This presentation explores the evolution of Foreign Information Manipulation and Interference (FIMI), focusing on the hybrid warfare tactics of Russia and China. It details how state actors use generative AI, fake media outlets, and coordinated bot networks to destabilize democratic processes and influence global public opinion.

Disinformation 2.0: Unmasking the New Front in Hybrid Warfare

In the modern digital landscape, the battlefield has shifted from physical borders to the information domain. We are no longer just dealing with occasional 'fake news' or social media trolls; we are witnessing the rise of Disinformation 2.0. This era is defined by Foreign Information Manipulation and Interference (FIMI), a sophisticated, state-sponsored effort to destabilize societies, erode trust in institutions, and influence political outcomes. Understanding these tactics is no longer just a requirement for intelligence agencies—it is a critical skill for every cybersecurity professional and informed citizen.

Background & Context

Disinformation is often misunderstood as simply 'information one disagrees with.' However, as Frankie Sagerman (former Head of Digital Insights at NATO) explains, it is a deliberate, distorted information flow secretly injected into the communication process to manipulate and deceive. This is not a new phenomenon. During the 1980s, the KGB's Operation Infektion successfully spread the narrative that the AIDS virus was a US biological weapon.

What has changed in the 2020s is the scale, speed, and accessibility of the tools. Today, the World Economic Forum ranks disinformation as the #1 global threat for the next two years, surpassing even armed conflict and extreme weather. The integration of Generative AI, coordinated botnets, and 'hybrid warfare'—where digital narratives lead to physical sabotage—has created a complex threat environment that requires a new analytical framework.

Technical Deep Dive

The ABCD Model of Disinformation

To analyze a FIMI campaign effectively, analysts use the ABCD model (recently expanded to ABCDE). This framework allows researchers to pivot from looking at a single post to identifying a coordinated operation:

  1. Actors: Identifying who is behind the campaign. Is it a state actor like the Russian GRU or the Chinese MSS? Or is it a commercial 'disinformation-as-a-service' provider?
  2. Behavior: Looking for patterns of coordinated inauthentic behavior (CIB). This includes bot accounts posting the same content simultaneously or using automated bio generators to appear legitimate.
  3. Content: The actual narrative. Often, these stories contain a 'kernel of truth' wrapped in a massive lie. For example, a narrative might place a public figure in a real city (the truth) but falsely claim they made an expensive, scandalous purchase while there (the lie).
  4. Distribution: The infrastructure used, such as Pink Slime websites (fake local news) or Doppelganger domains (spoofed versions of theguardian.com or nato.int).
  5. Effect: Measuring the impact on the target audience, such as shifts in public opinion or policy changes (e.g., blocking military aid).
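The Behavior dimension is the most directly automatable of the five. As a minimal sketch (the post records, window size, and account threshold below are illustrative assumptions, not a standard from the talk), coordinated inauthentic behavior can be surfaced by flagging identical texts posted by several distinct accounts within a short time window:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical post records: (account, text, ISO timestamp) — illustrative data only.
posts = [
    ("acct_001", "Breaking: aid package blocked!", "2024-05-01T12:00:03"),
    ("acct_002", "Breaking: aid package blocked!", "2024-05-01T12:00:05"),
    ("acct_003", "Breaking: aid package blocked!", "2024-05-01T12:00:07"),
    ("acct_004", "Lovely weather in Brussels today", "2024-05-01T12:00:06"),
]

def flag_coordinated(posts, window_seconds=30, min_accounts=3):
    """Flag texts posted verbatim by several distinct accounts within a short window."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((datetime.fromisoformat(ts), account))
    flagged = []
    for text, events in by_text.items():
        events.sort()
        accounts = {account for _, account in events}
        span = (events[-1][0] - events[0][0]).total_seconds()
        if len(accounts) >= min_accounts and span <= window_seconds:
            flagged.append(text)
    return flagged
```

Real detection pipelines additionally cluster near-duplicate text and account-creation metadata, but the core signal is the same: many accounts, one message, one moment.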

Poisoning the Well: LLM Infiltration

One of the most alarming developments is the successful poisoning of Large Language Models (LLMs). The Pravda network, a pro-Russian operation, published over 3.6 million articles across 150+ domains in 46 languages. The sheer volume of this data ensures that when the web crawls feeding Western AI systems like ChatGPT or Claude sweep up these sites, the models ingest the disinformation. Consequently, the AI may provide biased or false answers to users, effectively 'whitewashing' the propaganda through a trusted AI interface.
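One defensive measure is to filter crawled or retrieved sources against a blocklist of known FIMI domains before they reach a model or a retrieval pipeline. The sketch below assumes a threat-intelligence feed of flagged domains; the domain names and function are illustrative placeholders, not actual Pravda-network infrastructure:

```python
from urllib.parse import urlparse

# Illustrative blocklist — in practice this would come from a threat-intel feed
# tracking networks like Pravda (these domain names are placeholders).
FIMI_BLOCKLIST = {"news-pravda.example", "pravda-clone.example"}

def filter_sources(urls):
    """Drop URLs whose host is on the FIMI blocklist (or a subdomain of one)."""
    clean = []
    for url in urls:
        host = urlparse(url).hostname or ""
        blocked = host in FIMI_BLOCKLIST or any(
            host.endswith("." + domain) for domain in FIMI_BLOCKLIST
        )
        if not blocked:
            clean.append(url)
    return clean
```

Blocklists lag behind a network that registers domains by the dozen, so this complements — rather than replaces — volume- and behavior-based detection.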

The Doppelganger Tactic

The Doppelganger campaign uses sophisticated URL squatting. Attackers register domains like guardian.co.com instead of theguardian.com. They copy the entire site's CSS and layout. When a user visits, every link on the page (the header, the footer, the ads) points back to the legitimate site—except for the one fake article the attackers want to promote. These articles are then boosted with hundreds of thousands of dollars in social media advertising to ensure maximum reach.
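Defenders can hunt for these look-alike registrations programmatically. A minimal sketch, assuming a watchlist of protected domains (drawn from the examples above) and an illustrative similarity threshold, uses plain string similarity to flag candidates for manual review:

```python
from difflib import SequenceMatcher

# Watchlist of legitimate domains to protect (from the campaign examples above).
PROTECTED = ["theguardian.com", "nato.int"]

def lookalike_score(candidate: str, legit: str) -> float:
    """Similarity ratio (0..1) between a candidate domain and a protected one."""
    return SequenceMatcher(None, candidate, legit).ratio()

def is_suspicious(candidate: str, threshold: float = 0.75) -> bool:
    # A newly registered domain that is very similar to — but not exactly —
    # a protected domain warrants review (threshold is an assumed tuning value).
    return any(
        candidate != legit and lookalike_score(candidate, legit) >= threshold
        for legit in PROTECTED
    )
```

Production domain-monitoring tools add homoglyph checks and certificate-transparency feeds, but even this crude ratio catches the guardian.co.com pattern.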

From Digital to Physical: Hybrid Threats

Perhaps the most dangerous evolution is the transition to physical sabotage. Intelligence agencies have identified a trend where state actors recruit individuals through Telegram channels for 'micro-tasks.' Examples include:

  • Paying individuals in Bitcoin to place coffins under the Eiffel Tower.
  • Recruiting people to spray-paint anti-government graffiti.
  • Coordinating arson attacks on European warehouses or pharmaceutical plants.

Mitigation & Defense

Defending against FIMI requires a layered approach. First, organizations must adopt 'Narrative Intelligence'—monitoring not just for data breaches, but for shifts in sentiment and the emergence of coordinated lies.

Best Practices for Defenders:

  • Pre-bunking: Educating the public on how to spot specific disinformation tactics before they encounter them.
  • Domain Monitoring: Using OSINT tools to track the registration of look-alike domains and 'Pink Slime' news sites.
  • Platform Accountability: Pushing for better API access for researchers to analyze metadata on platforms like Meta, which currently restrict data access more than X (Twitter).
  • Fact-Checker Protection: Recognizing 'Operation Overload' tactics where your incident response team or fact-checkers are flooded with false reports to distract from a real attack.

Conclusion & Key Takeaways

The information domain is now a theatre of war. State actors like Russia and China are playing a 'long game,' planting narratives months or years before an election to slowly shift public perception. As the barriers to entry for creating high-quality deepfakes and mass-produced AI content drop, the responsibility falls on security professionals to develop more robust detection mechanisms. We must move beyond simple debunking and toward a structural defense of the information ecosystem. Be skeptical of high-emotion content, verify through multiple reputable sources, and remember: if a story seems too perfectly designed to make you angry, it probably was.

AI Summary

Frankie Sagerman, a former NATO digital insights lead, delivers a comprehensive analysis of modern disinformation, classified as Foreign Information Manipulation and Interference (FIMI). He begins by defining disinformation not just as 'fake news,' but as a deliberate, distorted information flow secretly injected into communication processes to deceive and manipulate. He traces the roots of these operations back to the 1980s with the KGB's 'Operation Infektion,' which falsely claimed the AIDS virus was a US biological weapon, demonstrating that while tools have evolved, the underlying strategic intent remains consistent.

The core of the presentation revolves around the ABCD model for analyzing disinformation campaigns, developed by James Pamment. This framework breaks down operations into Actors (state vs. commercial), Behavior (coordinated posting, fake personas), Content (narratives, manipulated media), Distribution (platforms, fake domains), and Effect (the difficult-to-measure impact on public perception). Sagerman highlights major contemporary campaigns such as 'Doppelganger,' where attackers clone reputable websites like The Guardian or NATO's official page to host fake articles, using URL squatting techniques (e.g., guardian.co.com). He also introduces 'Pink Slime' websites — fake local news outlets designed to 'whitewash' propaganda and make it appear as legitimate local reporting.

A significant portion of the talk focuses on the role of Generative AI in scaling these attacks. Sagerman details how the 'Pravda' network successfully poisoned Large Language Models (LLMs) by publishing millions of fake articles across hundreds of domains. This massive volume of data eventually gets ingested by AI systems like ChatGPT, causing them to generate disinformation in response to user queries.
He also discusses 'Operation Overload,' a tactic designed to paralyze fact-checking organizations by flooding them with hundreds of false leads and fake evidence, preventing them from addressing real threats. Finally, the speaker connects digital disinformation to 'Hybrid Warfare,' where online narratives transition into physical actions. He provides examples of Russia-linked recruitment on Telegram for physical sabotage, including vandalism in France and arson attacks across Europe. He concludes by emphasizing that countering FIMI requires a multifaceted approach: increasing public resilience (pre-bunking), improving platform accountability for ad revenue derived from disinformation, and establishing better international regulations for domain name registration.
