How I Learned to Stop Worrying and Love Building a Modern Detection & Response Program
This talk outlines a strategic framework for building a modern, proactive detection and response program by shifting from reactive, tool-centric models to business-aligned, data-driven operations. It emphasizes organizational design principles alongside techniques such as threat modeling and micro-purple testing to validate detection efficacy and prioritize security investments. The speaker provides a methodology for mapping security capabilities to business risks, enabling teams to move beyond simple alert-based metrics to meaningful performance reporting.
Stop Chasing Alerts: A Framework for Building Proactive Detection Programs
TLDR: Most security operations centers are stuck in a reactive loop of chasing alerts and managing tool sprawl. This talk provides a concrete framework for shifting to a proactive, business-aligned detection program using organizational design, threat modeling, and micro-purple testing. By mapping security capabilities to business risks, teams can move from measuring alert volume to demonstrating actual defensive efficacy.
Security teams often fall into the trap of measuring success by the number of alerts they process or the number of tools they have deployed. This is a losing game. If your primary metric is how many tickets your team closed last week, you are not running a security program; you are running a factory that produces noise. The reality is that most organizations are drowning in telemetry while remaining blind to the actual threats that matter to their specific business context.
Moving Beyond Tool-Centric Security
Many programs are built on a foundation of "we bought this, so we must use it." This leads to a fragmented architecture where tools are siloed, and the team spends more time managing vendor configurations than hunting for adversaries. A modern detection and response program requires a shift in philosophy. Instead of starting with the tool, start with the business.
You need to understand what is unique about your environment. What are the crown jewels? What are the specific threat actors targeting your industry? Once you have that context, you can map your defensive capabilities to the MITRE ATT&CK framework. This allows you to identify where you have coverage and, more importantly, where you have massive, unaddressed gaps.
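The coverage-gap mapping described above can be sketched in a few lines. This is an illustrative toy, not a real inventory: the technique subset, detection names, and the `coverage_gaps` helper are all hypothetical placeholders for whatever rule catalog and threat model your team maintains.

```python
# Sketch: map deployed detections to MITRE ATT&CK technique IDs and
# report the techniques in our threat model with no coverage.
# Technique subset and detection names are illustrative only.

# Techniques relevant to our (hypothetical) threat model
relevant_techniques = {
    "T1059.001": "Command and Scripting Interpreter: PowerShell",
    "T1566.001": "Phishing: Spearphishing Attachment",
    "T1003.001": "OS Credential Dumping: LSASS Memory",
    "T1021.001": "Remote Services: Remote Desktop Protocol",
}

# Detections currently deployed, keyed by the technique they cover
deployed_detections = {
    "T1059.001": ["encoded_powershell_cmdline"],
    "T1566.001": ["macro_doc_delivery"],
}

def coverage_gaps(relevant, deployed):
    """Return techniques in the threat model with no deployed detection."""
    return {tid: name for tid, name in relevant.items() if tid not in deployed}

for tid, name in sorted(coverage_gaps(relevant_techniques, deployed_detections).items()):
    print(f"GAP: {tid} ({name}) has no detection coverage")
```

Even a crude table like this forces the conversation away from "which tools do we own" and toward "which adversary behaviors can we actually see."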
The Power of Micro-Purple Testing
One of the most effective ways to validate your program is through micro-purple testing. This is not about running a massive, week-long red team engagement that leaves everyone exhausted. It is about running small, targeted simulations that test specific detection logic.
If you have a detection for a specific technique, such as T1059.001 (PowerShell execution), you should be able to run a controlled test to see if your SIEM or EDR actually fires an alert. If it does not, you have a gap. If it does, you can then test if your incident response team knows how to triage that specific alert. This iterative process is how you build a mature program. You are not just checking a box; you are continuously improving your ability to detect and respond to real-world activity.
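A minimal micro-purple harness for the T1059.001 example might look like the sketch below. It is an assumption-laden outline: `query_siem` is a stand-in for whatever search API your SIEM exposes, the encoded command is a harmless `Write-Host 'hi'`, and the generator should only ever run on lab hosts you control.

```python
# Sketch of a micro-purple test harness for T1059.001:
# generate benign telemetry, then poll the SIEM for the expected alert.
# query_siem() is a hypothetical stand-in for your SIEM's search API.
import subprocess
import time

def run_benign_powershell():
    """Generate harmless T1059.001-style telemetry (lab hosts only).
    The payload is base64/UTF-16LE for: Write-Host 'hi'"""
    subprocess.run(
        ["powershell.exe", "-EncodedCommand",
         "VwByAGkAdABlAC0ASABvAHMAdAAgACcAaABpACcA"],
        check=False,
    )

def wait_for_alert(query_siem, rule_name, timeout_s=300, interval_s=30):
    """Poll the SIEM until the expected alert fires or we time out."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if query_siem(rule_name):
            return True
        time.sleep(interval_s)
    return False
```

The pass/fail result is the point: a `False` here is a documented detection gap, and a `True` feeds directly into the next question of whether the responder who receives that alert knows what to do with it.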
To get started, look at the MITRE D3FEND matrix. It provides a catalog of defensive techniques that you can map against the offensive tactics you are trying to mitigate. It is a practical way to move from theoretical security to a concrete, testable architecture.
Operationalizing Your Data
When you are ready to report to leadership, stop showing them charts of "total alerts blocked." They do not care. They care about business risk. Instead, frame your reporting around the threats you are seeing and the controls you have in place to stop them.
If you are dealing with a high volume of phishing attempts, do not just report the number of emails blocked. Report on the effectiveness of your email security controls and the time it takes for your team to identify and contain a successful phish. Use the Tines SOC Automation Matrix to identify which parts of your triage process are manual and ripe for automation. By automating the repetitive, low-value tasks, you free up your analysts to focus on the complex investigations that actually require human intuition.
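The kind of low-value triage step worth automating first can be sketched as below. The regex, blocklist, and email body are illustrative; in a real workflow the lookup would call your threat-intel or URL-reputation service rather than a hard-coded set.

```python
# Sketch: automate the repetitive first pass of phishing triage by
# extracting URLs from a reported email and checking them against a
# blocklist. Blocklist and email body are illustrative placeholders.
import re

URL_PATTERN = re.compile(r"https?://[^\s\"'>]+")

def triage_phish(email_body, blocklist):
    """Return (urls_found, urls_flagged) for a reported email."""
    urls = URL_PATTERN.findall(email_body)
    flagged = [u for u in urls if any(bad in u for bad in blocklist)]
    return urls, flagged

body = "Click https://evil.example.com/login to verify your account."
urls, flagged = triage_phish(body, blocklist={"evil.example.com"})
# Only reports that survive this pass reach a human analyst.
```

Once this pass is automated, the metric you report is no longer "emails blocked" but "time from report to containment for the phish that got through."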
Building a Roadmap for Success
If you are currently stuck in a legacy, reactive model, do not try to fix everything at once. Start by declaring a form of bankruptcy on your existing, noisy alerts. If an alert does not lead to an investigation or a meaningful defensive action, turn it off. It is just noise.
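Alert bankruptcy can be made data-driven rather than a judgment call. The sketch below assumes you can export per-alert outcomes from your ticketing system; the field names and the one-percent threshold are hypothetical defaults, not a standard.

```python
# Sketch: "alert bankruptcy" triage from ticket history. For each rule,
# compute how often its alerts led to a real investigation; rules below
# a threshold are candidates to disable. Field names are hypothetical.

def bankruptcy_candidates(alert_history, min_action_rate=0.01):
    """alert_history: dicts with 'rule' and 'led_to_investigation' keys."""
    stats = {}
    for alert in alert_history:
        fired, actioned = stats.get(alert["rule"], (0, 0))
        stats[alert["rule"]] = (fired + 1,
                                actioned + int(alert["led_to_investigation"]))
    return sorted(
        rule for rule, (fired, actioned) in stats.items()
        if actioned / fired < min_action_rate
    )

history = (
    [{"rule": "noisy_dns_rule", "led_to_investigation": False}] * 500
    + [{"rule": "lsass_access", "led_to_investigation": True}] * 3
)
print(bankruptcy_candidates(history))  # ['noisy_dns_rule']
```

A rule that fired 500 times without a single investigation is not defending anything; it is training your analysts to ignore the console.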
Once you have cleared the deck, focus on one or two high-impact areas. Use the Snowflake Detection Series as a reference for how to build detections that are rooted in data engineering and software development principles. This approach treats your detection logic like code: it should be version-controlled, tested, and continuously improved.
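"Detection logic as code" can be as simple as the sketch below: the rule is a pure function over a normalized event, kept in version control next to its unit tests so CI catches regressions before deployment. The event fields are an illustrative simplified process-creation record, not any particular product's schema.

```python
# Sketch of detection-as-code: the detection is a pure function over a
# normalized event, version-controlled alongside its unit tests.
# Event fields are illustrative (simplified process-creation record).

def detect_encoded_powershell(event):
    """Flag PowerShell launched with an encoded command (T1059.001-style)."""
    image = event.get("process_image", "").lower()
    cmdline = event.get("command_line", "").lower()
    return image.endswith("powershell.exe") and "-encodedcommand" in cmdline

# Unit tests ship with the rule, so a refactor that breaks it fails CI.
def test_detect_encoded_powershell():
    hit = {
        "process_image": r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
        "command_line": "powershell.exe -EncodedCommand VwByAGkAdABl",
    }
    miss = {
        "process_image": r"C:\Windows\notepad.exe",
        "command_line": "notepad.exe report.txt",
    }
    assert detect_encoded_powershell(hit)
    assert not detect_encoded_powershell(miss)
```

From here, the micro-purple tests described earlier become the integration-test layer: the unit test proves the logic is right, and the live simulation proves the pipeline around it still delivers the event.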
The goal is to build a program that is not just busy, but effective. You want to reach a state where you are not just reacting to the latest headline, but actively shaping your environment to make it harder for an attacker to succeed. Stop worrying about the volume of your logs and start loving the process of building a program that actually detects the things that matter. If you are not testing your detections, you are just guessing. Start testing today.