Fireproof Your Castle with Risk-First GRC
This talk presents a risk-first approach to Governance, Risk, and Compliance (GRC) for security teams. It demonstrates how to move beyond compliance-driven checklists by using quantitative risk modeling, such as Monte Carlo simulations, to prioritize security investments. The speakers explain how to decompose risks into threat, asset, and impact components to provide actionable data for decision-makers. The presentation emphasizes balancing qualitative and quantitative methods to effectively manage organizational risk.
Why Your GRC Program is Failing to Stop Real Attacks
TLDR: Most GRC programs rely on compliance-driven checklists that fail to account for actual risk, leaving critical vulnerabilities exposed. By shifting to a risk-first model that decomposes each risk into its threat, asset, and impact components, security teams can prioritize the defenses that actually matter. This approach uses quantitative modeling to move beyond static spreadsheets and into data-driven decision-making.
Security teams often treat compliance as the finish line. We spend months mapping controls to frameworks like SOC 2 or PCI DSS, checking boxes to satisfy auditors, and then wonder why we still get popped by simple, preventable attacks. The reality is that compliance is a baseline, not a security strategy. If your GRC program is just a collection of spreadsheets and audit evidence, you are not managing risk; you are managing paperwork.
The Problem with Compliance-First Thinking
When you lead with compliance, you start with policies and requirements. You look at a framework, see a requirement for "access control," and implement a solution. This is a top-down approach that assumes the framework knows your business better than you do. It creates a false sense of security because it ignores the specific, messy reality of your infrastructure.
A risk-first approach flips this. You start by asking, "What can go wrong with this asset?" You identify your crown jewels—the data or systems that, if compromised, would actually hurt the business. Then, you bring in the experts to model the threats. You aren't looking for a gap in a control; you are looking for a path to a catastrophic event.
Decomposing Risk into Actionable Data
Risk is not a single number or a color on a heatmap. It is a combination of a threat, an asset, and an impact. If you are missing one of these, you aren't looking at a risk; you are looking at noise.
To make this actionable, you need to move toward quantitative modeling. This is where Monte Carlo simulations become incredibly powerful. Instead of saying a risk is "High," you run a simulation to determine the probable frequency and financial impact of an event over a specific period.
For example, if you are modeling the risk of "Deletion of Critical Assets," you don't just guess. You define the variables:
- Likelihood: What is the probability of a malicious insider or an external actor with compromised credentials executing this?
- Impact: What is the financial cost of the downtime, data recovery, and potential regulatory fines?
By running 50,000 simulations, you get a distribution of outcomes. Most of the time, nothing happens. But the tail end of that distribution shows you the existential threats. This is the data that leadership understands. They don't care about a "Medium" risk rating on a spreadsheet, but they do care about a 10% chance of a $9 million loss.
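The simulation described above can be sketched in a few lines. The parameters here (a 10% annual event probability, a lognormal loss distribution with a $2M median) are illustrative placeholders, not figures from the talk; a real model would calibrate them with the threat and impact estimates gathered during decomposition:

```python
import math
import random

def simulate_annual_loss(trials=50_000, p_event=0.10,
                         loss_median=2_000_000, loss_sigma=1.0, seed=1):
    """Monte Carlo sketch of a single risk scenario.

    Each trial models one year: the event either occurs (with probability
    p_event) or it doesn't; when it does, the financial impact is drawn
    from a lognormal distribution whose median is loss_median.
    All parameter values are illustrative placeholders.
    """
    rng = random.Random(seed)
    mu = math.log(loss_median)  # for a lognormal, median = e^mu
    losses = sorted(
        rng.lognormvariate(mu, loss_sigma) if rng.random() < p_event else 0.0
        for _ in range(trials)
    )
    return {
        "p_any_loss": sum(1 for x in losses if x > 0) / trials,
        "p95_loss": losses[int(0.95 * trials)],  # 95th-percentile annual loss
        "worst_case": losses[-1],
    }

result = simulate_annual_loss()
print(f"P(any loss this year): {result['p_any_loss']:.1%}")
print(f"95th-percentile annual loss: ${result['p95_loss']:,.0f}")
```

Most trials come back as zero; the information leadership needs lives in the tail of the distribution, which is exactly what a single "High/Medium/Low" label throws away.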
Bridging the Gap Between Security and Business
Pentesters and researchers often struggle to communicate the "why" behind their findings to non-technical stakeholders. We show them a high-severity bug, and they ask, "So what?"
When you use a risk-first model, you can answer that question with precision. You can show that a specific vulnerability in a cloud-infrastructure component isn't just a bug; it is a direct contributor to a risk scenario that has a measurable impact on the company's bottom line. This changes the conversation from "We need to patch this because it's a critical CVE" to "We need to fix this because it reduces the probability of a $4 million loss by 25%."
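That last sentence is just arithmetic, and it helps to show it as such. Using the same hypothetical figures (a 10% annual probability of a $4 million loss, cut by 25% after the fix), the expected annual loss avoided is:

```python
def expected_loss_reduction(p_before, p_after, impact):
    """Expected annual loss avoided when a fix lowers the event probability."""
    return (p_before - p_after) * impact

p_before = 0.10            # annual probability of the $4M loss scenario
p_after = p_before * 0.75  # the fix cuts that probability by 25%

saved = expected_loss_reduction(p_before, p_after, 4_000_000)
print(f"Expected annual loss avoided: ${saved:,.0f}")
```

A $100k-per-year expected-loss reduction is a number a CFO can weigh against the cost of the fix; "critical CVE" is not.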
Implementing a Hybrid Approach
You don't need to abandon qualitative analysis entirely. It is still the best tool for triaging a large volume of risks. Use qualitative methods to get a quick sense of the landscape, then use quantitative methods like FAIR (Factor Analysis of Information Risk) to dive deep into the risks that actually threaten the business.
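One way to wire the two methods together is a cheap ordinal triage pass that decides which risks earn the full quantitative treatment. The risk names, scores, and threshold below are all invented for illustration:

```python
# Hypothetical triage: score each risk 1-5 on likelihood and impact,
# and flag only the top scorers for deeper, FAIR-style quantitative analysis.
risks = [
    {"name": "Deletion of critical assets", "likelihood": 4, "impact": 5},
    {"name": "Phishing of a standard user account", "likelihood": 5, "impact": 2},
    {"name": "Lost unencrypted laptop", "likelihood": 2, "impact": 3},
]

QUANT_THRESHOLD = 15  # scores at or above this get a quantitative deep dive

for risk in risks:
    risk["score"] = risk["likelihood"] * risk["impact"]

deep_dive = [r["name"] for r in risks if r["score"] >= QUANT_THRESHOLD]
print(deep_dive)
```

The qualitative pass stays fast and covers everything; the expensive modeling effort is reserved for the handful of scenarios that could actually hurt the business.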
The goal is to be a partner to the business, not a blocker. When you present a control, don't just say it's required by a framework. Explain how it reduces the likelihood or impact of a specific, identified risk. If a control is too expensive or creates too much friction for your engineers, you can use your risk model to see if the reduction in risk is actually worth the cost.
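The cost-versus-friction trade-off in that last sentence is a comparison your risk model can make directly. A minimal sketch, with all dollar figures invented:

```python
def control_worth_deploying(ale_before, ale_after, annual_control_cost):
    """Compare annualized loss expectancy (ALE) with and without a control.

    The control pays for itself only if the risk it removes exceeds what
    it costs to run (licenses, engineering friction, upkeep).
    """
    risk_reduction = ale_before - ale_after
    return risk_reduction > annual_control_cost

# Invented numbers: a control that halves a $900k ALE but costs $500k/year
# to operate is not worth it; the same control at $300k/year is.
print(control_worth_deploying(900_000, 450_000, 500_000))  # False
print(control_worth_deploying(900_000, 450_000, 300_000))  # True
```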
Moving Forward
Stop letting auditors dictate your security roadmap. If you are a pentester or a researcher, start looking at the GRC side of your organization. Ask to see the risk register. If it’s just a list of compliance gaps, you have an opportunity to help them build something better.
Start by picking one critical asset and modeling the risk associated with it. Use the data to drive your next engagement. When you can prove that your testing is focused on the risks that keep the CEO up at night, you stop being a cost center and start being a strategic asset. The security of your organization depends on your ability to connect the technical reality of an exploit to the business reality of risk. Don't just find the bugs; explain why they matter.