
Counter Deception: Defending Yourself in a World Full of Lies

DEFCONConference · 42:54

This talk explores the principles of deception and counter-deception, drawing on military doctrine to analyze how these techniques are applied in modern information security. It examines how human cognitive biases, such as confirmation bias and selection bias, are exploited by attackers to manipulate perceptions and influence decision-making. The presentation provides a framework for counter-deception, emphasizing the need for critical analysis, information triangulation, and the development of tools to detect and mitigate deceptive narratives. The speakers highlight the importance of maintaining mental discipline and utilizing diverse, credible data sources to defend against sophisticated disinformation campaigns.

How Cognitive Biases and Information Manipulation Undermine Security Operations

TL;DR: Modern security operations are increasingly vulnerable to sophisticated deception campaigns that exploit human cognitive biases rather than just technical flaws. By understanding how attackers manipulate the information we consume, researchers can apply counter-deception principles like information triangulation to verify data integrity. This post breaks down how to apply these military-grade concepts to detect manipulated narratives in threat intelligence and open-source research.

Security professionals spend their careers hunting for technical vulnerabilities, but we often ignore the most effective attack vector: the human mind. Attackers have realized that if they can control the information you consume, they don't need to burn a zero-day to compromise your network. They can simply lead you to the wrong conclusion, causing you to misallocate resources or ignore a genuine threat. This is not a theoretical concern. We see it in the wild when threat actors use deceptive patterns to influence how security analysts attribute malware or interpret incident data.

The Mechanics of Cognitive Deception

At its core, deception is the act of hiding the truth to gain an advantage. In an information security context, this means influencing a target to make an incorrect decision or take an action that benefits the attacker. The most effective deceptions are grounded in human nature. We are wired to seek validation, and attackers exploit this through confirmation bias. If an analyst already suspects a specific threat actor is behind a campaign, they are significantly more likely to accept "evidence" that supports that theory, even if that evidence was planted.
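The effect of confirmation bias can be made concrete with a small Bayesian sketch. The numbers below are invented for illustration (they are not from the talk): a biased analyst effectively discounts alternative explanations for a planted artifact, which inflates their confidence far beyond what the evidence supports.

```python
# Illustrative only: the likelihoods below are made up to show the
# mechanism, not drawn from the talk or from real incident data.

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' rule for a binary hypothesis H given evidence E."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1.0 - prior))

# A planted artifact (say, a language string in a malware sample) that
# is likely under the favored attribution, but also fairly likely
# under other explanations.
careful = posterior(0.5, p_e_given_h=0.8, p_e_given_not_h=0.5)
biased  = posterior(0.5, p_e_given_h=0.8, p_e_given_not_h=0.1)  # dismisses alternatives

print(f"careful analyst: {careful:.3f}")  # 0.615 — modest update
print(f"biased analyst:  {biased:.3f}")   # 0.889 — weak evidence, strong belief
```

The same artifact, honestly weighed, barely moves the needle; treated as if only the favored actor could have produced it, it looks like near-proof. That gap is exactly what a planted clue exploits.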

The DIKW hierarchy (Data, Information, Knowledge, Wisdom) provides a useful model for understanding how this works. Attackers poison the data layer. By injecting false information into the sources we trust, they corrupt our knowledge and, ultimately, our wisdom. When you are performing threat intelligence, you are rarely looking at raw, unadulterated data. You are looking at a curated stream of information. If that stream is compromised, your analysis is compromised.
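A toy example shows how poisoning the data layer propagates upward. The naive majority-vote attribution and the feed contents below are this sketch's own assumptions, chosen to make the failure mode obvious:

```python
from collections import Counter

def attribute(indicators: list[str]) -> str:
    """Knowledge layer: collapse raw indicator data into one conclusion
    by naive majority vote (a deliberately fragile aggregation)."""
    return Counter(indicators).most_common(1)[0][0]

# Hypothetical actor labels attached to indicators from trusted feeds.
clean_feed = ["APT-A", "APT-A", "APT-B"]
print(attribute(clean_feed))  # APT-A

# The attacker poisons the data layer with a few planted indicators.
# Every layer above trusts the one below, so the conclusion flips.
poisoned_feed = clean_feed + ["APT-B", "APT-B"]
print(attribute(poisoned_feed))  # APT-B
```

Nothing at the knowledge or wisdom layer can detect this, because the analysis was performed correctly; only the inputs were wrong.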

Applying Counter-Deception Principles

Counter-deception is a professional discipline, not just a mindset. It requires the same rigor we apply to binary exploitation or network penetration testing. The first step is to recognize that your intuition is a liability. When you feel a strong sense of certainty about an attribution or a threat vector, that is exactly when you should pause and apply critical analysis.

One of the most effective techniques is information triangulation. Never rely on a single source for a high-stakes conclusion. If a report claims a specific malware sample is linked to a known APT, verify that claim against independent, diverse data sources. If you cannot find corroborating evidence, treat the initial report as suspect.
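Triangulation can be sketched as a simple corroboration check. The report structure and origin labels below are assumptions of this sketch; the key design choice is that two reports sharing an origin (for example, two outlets republishing the same vendor feed) count as one source, not two:

```python
def corroborated(reports: list[dict], claim: str, min_independent: int = 2) -> bool:
    """True if `claim` appears in reports from enough distinct origins.

    Deduplicating on `origin` models source *independence*: ten copies
    of the same vendor advisory are still a single source.
    """
    origins = {r["origin"] for r in reports if claim in r["claims"]}
    return len(origins) >= min_independent

reports = [
    {"origin": "vendor-x", "claims": {"sample linked to APT-Q"}},
    {"origin": "vendor-x", "claims": {"sample linked to APT-Q"}},  # re-post
    {"origin": "indie-researcher", "claims": {"sample uses CVE-2024-0001"}},
]

print(corroborated(reports, "sample linked to APT-Q"))  # False: one origin only
```

The attribution claim fails the check despite appearing twice, which is exactly the behavior you want when a single planted report is being amplified.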

We can also look to tools that help us audit the provenance of information. Projects like WikiScanner demonstrated years ago that we can track the origin of edits to public information, revealing when large organizations are scrubbing their own history. While that specific tool is a relic, the principle remains vital. We need to build or utilize tools that provide transparency into the lifecycle of the information we consume.
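The WikiScanner principle, mapping an anonymous edit back to an organizational netblock, is easy to sketch. The organization names and netblocks below are placeholders (RFC 5737 documentation ranges), not real attributions:

```python
import ipaddress

# Hypothetical netblock ownership table. Real provenance tooling would
# build this from WHOIS/RIR data; these are documentation ranges.
ORG_NETBLOCKS = {
    "ExampleCorp": ipaddress.ip_network("192.0.2.0/24"),
    "ExampleGov":  ipaddress.ip_network("198.51.100.0/24"),
}

def edit_origin(ip: str) -> str:
    """Map the source IP of an anonymous edit to a known organization."""
    addr = ipaddress.ip_address(ip)
    for org, net in ORG_NETBLOCKS.items():
        if addr in net:
            return org
    return "unknown"

print(edit_origin("192.0.2.7"))    # ExampleCorp
print(edit_origin("203.0.113.9"))  # unknown
```

The lookup itself is trivial; the hard and valuable work is maintaining an accurate ownership table, which is why provenance tooling decays without upkeep.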

The Role of LLMs in Information Triangulation

Large Language Models (LLMs) are a double-edged sword for security researchers. They are prone to hallucinations and carry their own built-in biases, which makes them dangerous as a primary source of truth. However, they excel at processing large volumes of unstructured data.

A researcher can use an LLM to summarize multiple, conflicting reports on a single incident. By asking the model to identify discrepancies between the reports, you can quickly surface the areas where the narrative is thin or where the evidence is contradictory. This does not replace human analysis, but it accelerates the process of identifying where you need to dig deeper.
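One minimal way to set this up is to assemble the conflicting reports into a single discrepancy-hunting prompt. The report texts below are invented placeholders, and the model call itself is omitted; any chat-completion API could consume the resulting string:

```python
def discrepancy_prompt(reports: dict[str, str]) -> str:
    """Build a prompt asking a model to surface factual disagreements
    between reports, without resolving them (the human does that)."""
    blocks = "\n\n".join(
        f"### Report from {src}\n{text}" for src, text in reports.items()
    )
    return (
        "You are assisting a security analyst. Compare the reports below "
        "and list every factual claim on which they disagree, citing which "
        "report makes which claim. Do not resolve the disagreements.\n\n"
        + blocks
    )

prompt = discrepancy_prompt({
    "vendor-x": "Initial access was a phishing email on 2024-03-02.",
    "ir-firm":  "Initial access was an exposed RDP service.",
})
print(prompt)
```

Instructing the model to list disagreements rather than resolve them is the point: the LLM narrows the search space, and the analyst keeps the judgment.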

Practical Steps for the Field

If you are on a red team engagement or conducting a bug bounty, you are constantly being fed information by the target. They might provide you with documentation, error messages, or even "leaked" internal communications. Always ask: who benefits from me believing this?

  1. Identify Key Assumptions: Write down the core assumptions you are making about the target's infrastructure or the threat landscape.
  2. Devil’s Advocacy: Actively try to prove your own assumptions wrong. If you assume a server is vulnerable to a specific exploit, look for evidence that it is patched or that the environment is hardened.
  3. Source Evaluation: Assess the independence and past performance of your data sources. A vendor advisory is useful, but it is also marketing. A blog post from a researcher is useful, but it is also an opinion.
  4. Monitor for Feedback: Attackers often monitor how their targets react to their deceptions. If you see signs that your research is being steered in a specific direction, assume you are being watched.
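Steps 1 and 2 above can be operationalized as a small assumption register: each working assumption is recorded alongside the disconfirming evidence you went looking for, and anything you never tried to disprove is flagged. The field names and example entries are this sketch's own, not from the talk:

```python
from dataclasses import dataclass, field

@dataclass
class Assumption:
    statement: str
    counter_evidence: list[str] = field(default_factory=list)

    @property
    def challenged(self) -> bool:
        """Devil's advocacy done: at least one disconfirming check logged."""
        return bool(self.counter_evidence)

def unchallenged(register: list[Assumption]) -> list[str]:
    """Assumptions you are still taking on faith."""
    return [a.statement for a in register if not a.challenged]

register = [
    Assumption("target web tier is unpatched for the candidate exploit",
               counter_evidence=["banner reports a patched build"]),
    Assumption("the leaked internal memo is genuine"),
]
print(unchallenged(register))  # ['the leaked internal memo is genuine']
```

An empty `unchallenged` list does not prove your model is right; it only proves you tried to break it, which is the discipline the list above is asking for.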

The goal is to build a more resilient analytical process. We cannot eliminate the risk of deception, but we can make it significantly more expensive for the attacker to succeed. By applying the same level of technical scrutiny to the information we consume as we do to the code we execute, we can ensure that our security decisions are based on reality rather than the narratives others want us to believe. The next time you find yourself nodding along to a threat report, stop and ask what facts might be missing.

Talk Type: research presentation
Difficulty: intermediate
Category: threat intel


DEF CON 32 · 2024