BSidesSF Plays Incident Response
This talk presents a gamified, interactive tabletop exercise designed to simulate the decision-making process during a security incident. It focuses on the critical phases of incident response, including initial triage, investigation, containment, and communication strategies. The session highlights the importance of cross-functional collaboration between security, legal, and executive teams when managing sensitive data breaches. The presenters emphasize the necessity of pre-established incident response plans and clear communication cadences to mitigate organizational risk.
Beyond the Shell: Why Your Incident Response Plan Is Failing
TLDR: Most incident response plans fall apart the moment a breach moves from a technical alert to a cross-functional crisis. This post breaks down the critical failure points in incident triage, specifically how legal, executive, and security teams misalign during data leaks. If you aren't running tabletop exercises that force these teams to communicate under pressure, you are already behind.
Security researchers often treat incident response as a post-mortem activity or a checkbox for compliance. We spend our time hunting for the exploit, chaining the SSRF to RCE, and dumping the database. But the moment the alarm sounds, the technical work is only half the battle. The real chaos begins when you have to decide whether to shut down a revenue-generating service, how to handle a potential OWASP A01:2021-Broken Access Control vulnerability that leaked 20,000 customer IDs, and who actually has the authority to pull the plug.
The Triage Trap
The most common failure in incident response is the lack of a single, defined source of truth for triage. In many organizations, the security team finds a bug, the engineering team tries to patch it, and the legal team is left in the dark until a reporter calls. This is a recipe for disaster.
During a recent simulation, we saw how quickly a simple data leakage issue—where a user’s ID scan was incorrectly associated with another user’s profile—escalated into a corporate crisis. The technical fix was straightforward: identify the logic error in the KYC flow and patch it. However, the incident response process stalled because the team didn't know how to handle the legal implications of the leak.
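The class of bug is worth spelling out, because the patch was never the hard part. Here is a minimal sketch of the pattern, assuming a Flask-style endpoint; the routes, the in-memory store, and current_user_id() are illustrative stand-ins, not code from the exercise:

```python
# A minimal sketch of the broken-access-control pattern described above.
# The Flask routes, the in-memory "database", and current_user_id() are
# illustrative stand-ins, not code from the exercise.

from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Stand-in data store: user_id -> that user's uploaded ID scan.
KYC_DOCUMENTS = {
    "1001": {"user_id": "1001", "scan": "/scans/1001-passport.png"},
    "1002": {"user_id": "1002", "scan": "/scans/1002-passport.png"},
}

def current_user_id() -> str:
    # Placeholder for whatever session/auth layer the real app uses.
    return "1001"

# Broken: the document is looked up by a user_id taken straight from the
# request, so any authenticated user can pull another user's ID scan.
@app.route("/kyc/document")
def get_kyc_document_broken():
    user_id = request.args.get("user_id", "")
    doc = KYC_DOCUMENTS.get(user_id)
    if doc is None:
        abort(404)
    return jsonify(doc)

# Patched: the lookup is keyed on the authenticated session, and the
# client-supplied identifier is never trusted.
@app.route("/kyc/v2/document")
def get_kyc_document_fixed():
    doc = KYC_DOCUMENTS.get(current_user_id())
    if doc is None:
        abort(404)
    return jsonify(doc)
```

The fix is a one-line change to key the lookup on the session instead of the request; the legal and communication questions that follow are the part most plans never rehearse.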
When you are in the middle of an incident, you need to know exactly who is authorized to declare it. If your team has to wait for a manager to approve a ticket before they can start the investigation, you are losing hours of critical time. Establish a single Slack channel or official incident management tool where all communication happens. If it isn't in the channel, it didn't happen.
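If Slack is your channel of record, declaring an incident should be a scripted, one-command action rather than something the on-call engineer improvises at 2 a.m. Here is a rough sketch using slack_sdk; the channel naming scheme, the responder list, and the token variable are assumptions, not a prescribed layout:

```python
# Minimal sketch: declare an incident by creating a single channel of record
# and pulling in the pre-agreed responders. Channel naming, the responder
# list, and the token env var are assumptions for illustration.

import os
from datetime import datetime, timezone

from slack_sdk import WebClient

client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

def declare_incident(summary: str, responder_user_ids: list[str]) -> str:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M")
    channel_name = f"inc-{stamp}"

    # One channel per incident; everything else points back to it.
    channel = client.conversations_create(name=channel_name)
    channel_id = channel["channel"]["id"]

    client.conversations_invite(channel=channel_id, users=responder_user_ids)
    client.conversations_setTopic(channel=channel_id, topic=summary)
    client.chat_postMessage(
        channel=channel_id,
        text=f"Incident declared: {summary}. All updates go in this channel.",
    )
    return channel_id
```

The point is not the tooling; it is that the declaration, the channel, and the invite list exist before anyone has to think about them.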
Communication is a Technical Skill
We often assume that if we have the technical logs, we have the situation under control. But logs don't tell you if you are in an open trading window or if you are about to violate a regulatory reporting requirement.
When you encounter a vulnerability that involves sensitive user data, your first instinct might be to keep it quiet until you have a full root cause analysis. This is almost always the wrong move. If you are a public company, you have specific obligations to report security incidents. If you don't loop in your legal counsel early, you risk turning a manageable technical bug into a massive litigation headache.
Use a "security@" email alias that is monitored by both security and legal teams. This ensures that when a vendor or a researcher reaches out with a report, the right people are in the loop from the start. Don't wait for the "all clear" to involve the people who actually understand the business risk.
The Art of the Tabletop
If you haven't run a tabletop exercise in the last six months, you don't have an incident response plan. You have a document that will be ignored the moment the production environment starts burning.
A good tabletop exercise isn't about testing if your engineers can write a patch. It’s about testing if your CISO, your lead developer, and your general counsel can make a decision when they have incomplete information.
During these exercises, force the team to answer the hard questions:
- How many people are affected?
- Where are they located?
- What is the timeline of the breach?
- Is the access to data temporary or ongoing?
If you can't answer these questions within the first hour of an incident, your monitoring and logging strategy needs a complete overhaul. You need to be able to pull logs that show exactly what data was accessed and by whom. If you are using Splunk or ELK, ensure your queries are tuned for incident response, not just performance monitoring.
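As a concrete example, the first question a responder usually has to answer is "which records did this account or session touch, and when." Here is a rough sketch against an Elasticsearch-backed access log; the index name and field names are assumptions about your own logging schema, not a standard:

```python
# Sketch of an incident-response query: everything a suspect session touched
# in a given window, aggregated by the record it accessed. The index name and
# field names (session_id, accessed_user_id, @timestamp) are assumptions
# about your own logging schema. Uses elasticsearch-py 8.x keyword arguments.

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def records_touched(session_id: str, start: str, end: str) -> list:
    resp = es.search(
        index="app-access-logs-*",
        query={
            "bool": {
                "filter": [
                    {"term": {"session_id": session_id}},
                    {"range": {"@timestamp": {"gte": start, "lte": end}}},
                ]
            }
        },
        aggs={
            "records": {"terms": {"field": "accessed_user_id", "size": 1000}}
        },
        size=0,
    )
    return resp["aggregations"]["records"]["buckets"]

# Example: records_touched("abc123", "2024-01-01T00:00:00Z", "2024-01-02T00:00:00Z")
```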
Why You Need to Talk to Your Blue Team
As pentesters, we often view the blue team as the people we need to bypass. But in a real-world scenario, they are the ones who will be dealing with the fallout of our findings. When you find a critical vulnerability, don't just drop a report and walk away. Sit down with the defenders and walk them through the attack flow.
If you can show them exactly how you bypassed their access controls, they can build better detection rules. If you can show them how you exfiltrated the data, they can implement better egress filtering. The goal is to make the next incident easier for them to manage.
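One concrete hand-off: if the finding was the kind of broken access control described earlier, the detection heuristic nearly writes itself. Here is a hedged sketch; the event format and threshold are assumptions to tune against your own baseline traffic:

```python
# Sketch of a detection heuristic for the access-control bypass above:
# flag any session that reads an unusually large number of *other* users'
# records in a short window. The event format and threshold are assumptions.

from collections import defaultdict

DISTINCT_RECORD_THRESHOLD = 20  # tune against normal traffic

def flag_suspicious_sessions(access_events: list[dict]) -> list[str]:
    """access_events: dicts with 'session_id', 'actor_user_id', 'accessed_user_id'."""
    touched = defaultdict(set)
    for event in access_events:
        # Only count reads of records that don't belong to the actor.
        if event["accessed_user_id"] != event["actor_user_id"]:
            touched[event["session_id"]].add(event["accessed_user_id"])
    return [
        session for session, records in touched.items()
        if len(records) >= DISTINCT_RECORD_THRESHOLD
    ]
```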
Remember that incident response is everyone's responsibility. If you are a researcher, your job doesn't end when you get the bounty. It ends when the organization has the information it needs to prevent the next breach.
Stop treating incident response as a separate, boring administrative task. It is the most high-stakes part of our job. The next time you are on an engagement, look at the client's response process. Are they prepared, or are they just waiting for the next disaster to happen? If they aren't prepared, tell them. It might be the most valuable part of your report.