Threat Modeling: Star Wars Edition
This talk provides a practical, high-level overview of the threat modeling process using the Death Star as a case study. It details the five core phases of threat modeling: analyzing the existing system, designing data flow diagrams, identifying threats, brainstorming mitigation strategies, and continuous iteration. The presentation demonstrates how to apply the STRIDE framework to identify vulnerabilities and map them to specific security controls. It emphasizes the importance of adopting a security mindset to proactively identify potential failure points in system design.
Why Your Threat Model Is Failing (And How to Fix It)
TL;DR: Most threat models are bloated, static documents that gather dust instead of finding bugs. By adopting a lightweight, iterative approach—like the one demonstrated in this breakdown of the Death Star’s architecture—you can identify critical failure points before a single line of code is deployed. This post explains how to map the STRIDE framework to your data flow diagrams to turn abstract design reviews into actionable security findings.
Security teams often treat threat modeling as a bureaucratic hurdle. They spend weeks drafting massive, unreadable documents that attempt to account for every theoretical risk, only to have those documents ignored by developers the moment the sprint starts. If your threat model isn't helping you find bugs, you aren't doing threat modeling; you’re just doing paperwork.
The most effective way to approach this is to stop treating the model as a static artifact and start treating it as a living part of your development lifecycle. Whether you are building a new microservice or refactoring an existing API, the goal is to identify how an attacker will break your assumptions.
The Five Phases of Practical Threat Modeling
Effective threat modeling relies on a repeatable, five-phase process that forces you to think like an adversary.
First, you must analyze the existing system. This isn't just about reading documentation. You need to understand the nominal function of the system, the services it consumes, and the security controls already in place. If you don't know what your system is supposed to do, you have no hope of figuring out how it can be made to fail.
Second, you design data flow diagrams (DFDs). These are the backbone of your model. By mapping out processes, data stores, external entities, and trust boundaries, you create a visual representation of the attack surface. If you can’t draw the flow of data, you can’t secure it.
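A DFD does not have to live only on a whiteboard; it can be captured as data you can query. Below is a minimal sketch in plain Python (not any particular threat modeling tool), using hypothetical element names, that models processes, external entities, and trust zones, and flags every flow that crosses a trust boundary:

```python
# Minimal data-flow-diagram model. Element names (rebel_pilot,
# targeting_api, firing_computer) are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    name: str
    kind: str   # "process", "datastore", or "external"
    zone: str   # trust zone the element lives in

@dataclass(frozen=True)
class Flow:
    src: Element
    dst: Element
    data: str

def crosses_trust_boundary(flow: Flow) -> bool:
    """A flow crosses a boundary when its endpoints sit in different zones."""
    return flow.src.zone != flow.dst.zone

pilot = Element("rebel_pilot", "external", "untrusted")
gateway = Element("targeting_api", "process", "dmz")
core = Element("firing_computer", "process", "internal")

flows = [
    Flow(pilot, gateway, "targeting request"),
    Flow(gateway, core, "firing command"),
]

for f in flows:
    if crosses_trust_boundary(f):
        print(f"REVIEW: {f.src.name} -> {f.dst.name} ({f.data})")
```

Once the diagram is data, "find every boundary crossing" becomes a one-line query instead of a visual inspection, which matters as the model grows past a handful of elements.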
Third, you identify threats. This is where you apply a framework like STRIDE. Don't try to cover everything at once. Focus on the trust boundaries you identified in your DFD. If a process crosses from an untrusted zone to a trusted one, that is where you should be looking for identification and authentication failures.
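A common way to focus this step is the STRIDE-per-element heuristic (popularized by Microsoft's SDL threat modeling work): each element type is susceptible to a subset of the six categories. The mapping below is a sketch of that heuristic and should be treated as a starting checklist, not a rule:

```python
# STRIDE-per-element heuristic: which STRIDE categories typically
# apply to each DFD element type. Treat as a starting point.
STRIDE_BY_KIND = {
    "external":  ["Spoofing", "Repudiation"],
    "process":   ["Spoofing", "Tampering", "Repudiation",
                  "Information Disclosure", "Denial of Service",
                  "Elevation of Privilege"],
    "datastore": ["Tampering", "Repudiation",
                  "Information Disclosure", "Denial of Service"],
    "dataflow":  ["Tampering", "Information Disclosure",
                  "Denial of Service"],
}

def threats_for(kind: str) -> list:
    """Return the candidate STRIDE categories for a DFD element type."""
    return STRIDE_BY_KIND.get(kind, [])
```

Processes get all six categories because they both accept input and hold privilege; external entities are mainly a spoofing and repudiation concern because you don't control them.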
Fourth, you brainstorm mitigation strategies. Every threat you identify needs a corresponding control. If you find a potential for tampering, you need integrity checks. If you find a potential for spoofing, you need stronger authentication. Map these mitigations to your backlog as actionable work items.
Finally, you continuously iterate and verify. A threat model is never finished. As you deploy new features or change your infrastructure, your model must evolve. If you aren't updating your model, you are operating on outdated assumptions.
Applying the Model to Real-World Architecture
Consider the Death Star. It’s a massive, complex system with intricate, interdependent services. If you were a pentester tasked with assessing its security, you wouldn't start by looking at the entire station. You would start by looking at the data flow.
The Death Star’s primary weapon, the superlaser, is a process. It relies on a command station computer to receive input and trigger the firing sequence. An attacker doesn't need to destroy the entire station to achieve their goal; they only need to find the one vulnerability that allows them to bypass the station's defenses. In this case, the thermal exhaust port is a classic example of a design flaw that leads to broken access control.
When you map this to the STRIDE framework, you can see how an attacker would approach the problem:
- Spoofing: Impersonating a high-ranking officer to gain access to the command station.
- Tampering: Modifying the targeting data to misdirect the superlaser.
- Repudiation: Deleting logs to hide the fact that the superlaser was fired.
- Information Disclosure: Leaking the station's blueprints to the rebellion.
- Denial of Service: Jamming the communication channels to prevent the station from coordinating its defense.
- Elevation of Privilege: Exploiting a low-level maintenance account to gain administrative access to the station's core systems.
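Each finding above only becomes useful once it is paired with a control and filed as a work item. The sketch below pairs STRIDE categories with standard candidate controls (authentication counters spoofing, integrity checks counter tampering, and so on) and turns a few of the Death Star findings into a backlog; the specific control strings are illustrative, not exhaustive:

```python
# Standard candidate control per STRIDE category.
MITIGATIONS = {
    "Spoofing": "strong authentication (mutual TLS, MFA)",
    "Tampering": "integrity checks (signatures, HMACs)",
    "Repudiation": "tamper-evident audit logging",
    "Information Disclosure": "encryption and least-privilege access",
    "Denial of Service": "rate limiting and redundancy",
    "Elevation of Privilege": "authorization checks at every boundary",
}

# A few of the Death Star findings from the text.
findings = [
    ("Spoofing", "Impersonate an officer at the command station"),
    ("Tampering", "Modify superlaser targeting data"),
    ("Elevation of Privilege", "Pivot from a maintenance account"),
]

# Turn each finding into an actionable backlog item.
backlog = [
    {"threat": desc, "category": cat, "control": MITIGATIONS[cat]}
    for cat, desc in findings
]

for item in backlog:
    print(f"[{item['category']}] {item['threat']} -> {item['control']}")
```

The point of the lookup table is traceability: every item in the backlog carries both the threat that motivated it and the control that closes it, so nothing identified in phase three gets silently dropped in phase four.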
Why This Matters for Your Next Engagement
For a pentester or bug bounty hunter, this process is how you find the "un-findable" bugs. Most automated scanners will miss logic flaws because they don't understand the business context of the application. By building a DFD, you can see the gaps in the developer's logic.
If you are testing an API, don't just run a fuzzer. Map the data flow. Where does the user input go? What processes handle that input? What trust boundaries does it cross? Once you have that map, you can start asking the right questions. Can I spoof a request? Can I tamper with the parameters? Can I access data I shouldn't be able to see?
Defenders can use this same process to prioritize their work. You cannot fix every vulnerability, but you can fix the ones that matter. By mapping your threats to your DFD, you can see which vulnerabilities pose the greatest risk to your most critical assets.
Stop writing reports that no one reads. Start building models that help you think. The next time you sit down to test an application, don't just look for the low-hanging fruit. Build a map, identify the trust boundaries, and find the thermal exhaust port that the developers missed. That is where the real bugs are hiding.