Black Hat Europe 2023 Closing Panel
This panel discussion explores the intersection of cybersecurity research, policy, and the practical challenges of vulnerability disclosure. The speakers analyze the impact of software supply chain security, the role of independent researchers, and the implications of legal frameworks like the Digital Millennium Copyright Act. The discussion highlights the tension between security research and the law, emphasizing the need for better transparency and standardized disclosure processes.
The Supply Chain Blind Spot: Why Your SBOM Is Just a Paper Trail
TLDR: Software supply chain security is currently failing because we treat SBOMs as compliance checklists rather than actionable intelligence. The industry is stuck in a cycle of reactive patching while ignoring the deeper architectural risks of third-party dependencies. Pentesters and researchers need to shift focus from simple version matching to data-flow analysis to identify where actual risk resides in their target environments.
Software supply chain security has become the industry's favorite buzzword, but the actual practice remains a mess of incomplete data and false confidence. We are currently obsessed with generating Software Bill of Materials (SBOM) files, yet we rarely ask what happens when those files are actually used in a real-world engagement. The reality is that most organizations are drowning in a sea of dependency alerts, most of which are noise, while the critical vulnerabilities—the ones that actually lead to remote code execution—remain buried in the "transitive" layers of the stack.
The Illusion of Visibility
During the recent panel discussions at Black Hat, the conversation kept circling back to a fundamental disconnect: the gap between what we document and what we actually run. When you look at a standard SBOM, you are looking at a static snapshot of a dependency tree. It tells you that a specific library is present, but it tells you absolutely nothing about whether that library is reachable, whether it is in the execution path, or whether it is even being invoked by the application.
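To make that concrete, here is a minimal sketch of what an SBOM actually gives you: a static inventory of names and versions, nothing more. The CycloneDX-style fragment below is hypothetical data invented for illustration, not taken from any real application.

```python
import json

# A minimal CycloneDX-style SBOM fragment (hypothetical data for illustration).
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "log4j-core", "version": "2.14.1"},
    {"type": "library", "name": "jackson-databind", "version": "2.13.0"}
  ]
}
"""

sbom = json.loads(sbom_json)

# All the SBOM can tell us: name and version. Nothing here says whether the
# component is loaded at runtime, reachable from an entry point, or invoked at all.
for comp in sbom["components"]:
    print(f'{comp["name"]}=={comp["version"]}')
```

Every downstream question that matters to an attacker — is this loaded, is it reachable, does user input touch it — has to come from somewhere other than this file.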
For a pentester, this is the difference between a high-severity finding that gets ignored and a critical exploit that gets you a payout. If you were around for CVE-2021-44228, the Log4Shell vulnerability, you saw the industry scramble to patch every instance of the library. But the real pros were the ones who could prove that the vulnerable code path was actually exposed to user input. Much of the time, the "vulnerable" component sits in a dead-code path, completely unreachable by an attacker. We need to stop treating every CVE as an immediate fire drill and start treating them as data points in a larger threat model.
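Even presence-checking goes deeper than reading an SBOM. During Log4Shell triage, a common quick win was scanning jars on disk for the bundled `JndiLookup.class`. The sketch below does that with Python's `zipfile`, constructing a fake jar in a temp directory so it runs standalone; note that a hit only proves the class ships, not that it is reachable, which is exactly the gap described above.

```python
import pathlib
import tempfile
import zipfile

def find_jndi_lookup(root: pathlib.Path):
    """Yield jars under root that bundle JndiLookup.class (the Log4Shell trigger class)."""
    for jar in root.rglob("*.jar"):
        try:
            with zipfile.ZipFile(jar) as zf:
                if any(name.endswith("JndiLookup.class") for name in zf.namelist()):
                    yield jar
        except zipfile.BadZipFile:
            continue  # not a real jar; skip it

# Demo: build a fake jar so the sketch runs without a real deployment to scan.
tmp = pathlib.Path(tempfile.mkdtemp())
with zipfile.ZipFile(tmp / "app.jar", "w") as zf:
    zf.writestr("org/apache/logging/log4j/core/lookup/JndiLookup.class", b"\xca\xfe\xba\xbe")

hits = list(find_jndi_lookup(tmp))
print(hits)
```

On a real engagement you would point this at the deployment root and treat each hit as the start of a reachability question, not the end of the finding.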
Moving Beyond the Checklist
The current obsession with SBOMs and automated scanning tools like OWASP Dependency-Track is a good start, but it is not a strategy. These tools are excellent at identifying A06:2021-Vulnerable and Outdated Components, but they lack the context of the application's runtime behavior.
If you are performing a red team engagement, your goal should be to map the data flow from the entry point to the sink. If you find a vulnerable library, don't just report the version number. Trace the execution. Can you reach the vulnerable function? Does the application sanitize the input before it hits that library? If you can demonstrate that a "critical" vulnerability is actually exploitable because of how the application handles data, you have graduated from scanner-monkey to researcher.
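One way to make "trace the execution" concrete is a crude static call-graph check: parse the code, record which function calls which, and ask whether the entry point can reach the flagged sink. The Python sketch below runs over a hypothetical source snippet (every function name is invented for illustration); a real engagement needs a proper analysis tool, since this ignores dynamic dispatch, imports, and actual data flow.

```python
import ast
from collections import defaultdict

# Hypothetical application source: does the HTTP handler ever reach the
# function flagged by the dependency scanner?
source = """
def handler(request):
    data = sanitize(request)
    process(data)

def process(data):
    log(data)

def log(msg):
    vulnerable_format(msg)   # the flagged third-party call

def unused_helper():
    vulnerable_format("static")
"""

# Build a naive call graph: function name -> set of directly called names.
tree = ast.parse(source)
calls = defaultdict(set)
for func in tree.body:
    if isinstance(func, ast.FunctionDef):
        for node in ast.walk(func):
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                calls[func.name].add(node.func.id)

def reachable(entry: str, sink: str) -> bool:
    """Depth-first search over the call graph from entry toward sink."""
    seen, stack = set(), [entry]
    while stack:
        fn = stack.pop()
        if fn == sink:
            return True
        if fn in seen:
            continue
        seen.add(fn)
        stack.extend(calls.get(fn, ()))
    return False

print(reachable("handler", "vulnerable_format"))  # True: the sink is on the request path
```

The point of the exercise is the question it forces: `unused_helper` also calls the sink, but no entry point reaches it, so a version-match alert on that path alone would be noise.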
The Legal and Policy Minefield
One of the most frustrating aspects of modern research is the legal friction. We are seeing more researchers get caught in the crosshairs of the Digital Millennium Copyright Act when they try to reverse-engineer proprietary firmware or cloud-based services to find these supply chain flaws. The panel highlighted a critical point: the law is currently designed to protect the vendor, not the security of the ecosystem.
When you are hunting for bugs in a complex supply chain, you are often dealing with "black box" components. You might find a vulnerability in a third-party module, but you have no way to report it because the vendor doesn't have a disclosure program. This is where the community needs to push back. We need to normalize the idea that if you ship code, you are responsible for the security of that code, regardless of where the individual modules came from.
What to Do Next
Stop relying on automated tools to tell you what is broken. Start looking at the architecture. If you are a developer, look at your build pipeline. Are you pulling in dependencies from public repositories without any verification? If you are a researcher, look for the gaps in the documentation. Where are the undocumented APIs? Where are the hidden configuration files that control how these third-party libraries behave?
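For the build-pipeline question, one low-effort verification step is pinning artifact hashes and refusing anything that does not match, in the spirit of pip's `--hash` mode or a lockfile. The sketch below is a minimal stand-in (the artifact name and contents are invented), not a replacement for a real lockfile workflow.

```python
import hashlib
import pathlib
import tempfile

def sha256_of(path: pathlib.Path) -> str:
    """Digest of a downloaded artifact, in the 'sha256:<hex>' form used for pinning."""
    return "sha256:" + hashlib.sha256(path.read_bytes()).hexdigest()

def verify(path: pathlib.Path, pinned: dict) -> bool:
    """Reject any artifact whose digest does not match the recorded pin."""
    return pinned.get(path.name) == sha256_of(path)

# Demo with a stand-in artifact in a temp directory.
tmp = pathlib.Path(tempfile.mkdtemp())
artifact = tmp / "widget-1.2.0.tar.gz"
artifact.write_bytes(b"trusted release contents")
pinned = {artifact.name: sha256_of(artifact)}  # recorded at pin time

print(verify(artifact, pinned))                # True: matches the pin
artifact.write_bytes(b"tampered contents")
print(verify(artifact, pinned))                # False: pull is rejected
```

Hash pinning does not tell you the original release was clean, but it does collapse the window in which a swapped or tampered artifact can slip into your build unnoticed.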
The next time you are on an engagement, pick a high-value target and map its dependencies. Don't just run a scan. Look at the imports. Look at the configuration. Find the one library that is doing something it shouldn't be doing, and follow that thread. That is where the real bugs are hiding. The industry is going to keep chasing the low-hanging fruit of version numbers, but the real impact is found by those who understand the system better than the people who built it. Keep digging, keep questioning the assumptions, and don't let the compliance paperwork distract you from the actual code.