
Secure Designs, UX Dragons, Vuln Dungeons

Security BSides San Francisco (BSidesSF 2025) · 43:46

This panel discussion explores the intersection of secure design principles, user experience (UX) challenges, and the prevalence of common vulnerabilities in modern software development. The speakers analyze how insecure defaults and poor UX choices contribute to critical security flaws, emphasizing the need for secure-by-default configurations. They discuss the role of emerging technologies like LLMs in both introducing and mitigating vulnerabilities, while advocating for a shift toward outcome-based security metrics rather than just effort-based compliance.

Beyond the Checklist: Why Secure Defaults Are Your Best Bug Bounty Defense

TLDR: Security panels often devolve into abstract theory, but this discussion cuts through the noise to focus on the real-world impact of insecure defaults and poor UX. By analyzing how common vulnerabilities like SSRF and command injection stem from design flaws rather than just coding errors, the speakers argue for a shift toward outcome-based security. For researchers and testers, the takeaway is clear: stop hunting for bugs in isolation and start mapping the architectural decisions that make those bugs inevitable.

Modern application security is suffering from a crisis of confidence. We spend thousands of hours running automated scanners and filing reports for low-hanging fruit, yet the same classes of vulnerabilities—A03:2021-Injection and A01:2021-Broken Access Control—continue to dominate the landscape. The problem isn't that developers don't know how to write secure code; it is that the frameworks and defaults they use are often actively hostile to security. When a default configuration exposes sensitive metadata or allows arbitrary command execution, you aren't looking at a coding mistake. You are looking at a failure of design.

The Architecture of Insecure Defaults

The most dangerous vulnerabilities are the ones that come pre-packaged with the software. Consider the classic case of CVE-2021-44228, the Log4j remote code execution flaw. While the technical exploit was a masterclass in JNDI lookup abuse, the real issue was that the library enabled such dangerous functionality by default. When a developer imports a library, they expect it to perform its primary function—logging—not to reach out to an external LDAP server and execute arbitrary Java classes.

This is the core of A04:2021-Insecure Design. As a researcher, if you are only looking for where a user input hits a database, you are missing the bigger picture. You need to look at the "UX of the API." If the documentation or the default implementation encourages a pattern that leads to Server-Side Request Forgery (SSRF), that is a design flaw. During a pentest, I often find that the most critical findings aren't hidden behind complex obfuscation; they are sitting in plain sight, enabled because the vendor thought it would be "convenient" for the user.
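To make the "design the safe path in" argument concrete, here is a minimal sketch of an SSRF guard in Python (standard library only; the helper name `is_safe_url` is mine, not from the talk). It rejects any URL whose host resolves to a private, loopback, link-local, or reserved address before the application ever fetches it:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs whose host resolves to a non-public address.

    A minimal SSRF guard sketch. Real deployments must also re-validate
    after redirects and at connect time (DNS rebinding defense).
    """
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        # Resolve every address the hostname maps to, not just the first.
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved):
            return False
    return True
```

The point is not this particular check but where it lives: if the fetch wrapper applies it by default, the developer has to work to be insecure rather than to be secure.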

When UX Becomes a Security Vector

We often treat UX and security as separate silos, but they are deeply intertwined. If a security control is too difficult to implement, developers will bypass it. If a configuration option is confusing, they will choose the one that makes the application work, regardless of the risk. This is why we see so many instances of hardcoded credentials or overly permissive IAM roles in cloud environments.

Take the example of AWS EC2 metadata services. If an application is designed to query the metadata service without proper restrictions, it becomes a goldmine for an attacker who has achieved even a limited SSRF. The vulnerability isn't just the SSRF; it is the fact that the environment was designed to trust any request coming from the local instance. When you are testing an application, ask yourself: "What is the path of least resistance for the developer?" If that path leads to a vulnerability, you have found a systemic issue that is likely present across the entire codebase.
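A short sketch of why this is a design problem rather than a filtering problem: the metadata endpoint can be spelled many ways, so a naive hostname blocklist loses to a design-level fix such as IMDSv2's required session token. The helper below (Python; the function name is mine, not from the talk) normalizes a few alternate encodings an attacker might use:

```python
import ipaddress

# The EC2 instance metadata service lives at a fixed link-local address.
METADATA_V4 = ipaddress.ip_address("169.254.169.254")

def targets_metadata(host: str) -> bool:
    """Return True if `host` is an alternate spelling of the metadata IP.

    Dotted quad, bare decimal/hex integer, and IPv4-mapped IPv6 forms
    all reach the same endpoint; string blocklists routinely miss them.
    """
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        try:
            # Integer forms like "2852039166" or "0xa9fea9fe".
            addr = ipaddress.ip_address(int(host, 0))
        except (ValueError, TypeError):
            return False
    if addr.version == 6 and addr.ipv4_mapped:
        addr = addr.ipv4_mapped
    return addr == METADATA_V4
```

Even this normalization is incomplete (redirects, DNS rebinding), which is exactly why the durable fix is changing the trust model of the environment, not the string matching of the application.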

The LLM Factor in Modern Development

Emerging tools like LLMs are changing the velocity of development, but they are also accelerating the rate at which insecure patterns are propagated. When a developer asks an AI to "write a function to fetch data from a URL," the AI will often provide a snippet that is syntactically correct but security-blind. It won't warn the developer about validating the host, checking for private IP ranges, or implementing a timeout.

This creates a feedback loop where insecure design patterns are codified into the application's foundation. As testers, we need to adapt. We can no longer rely solely on manual code review. We need to look at the output of these tools and identify the "hallucinated security" that developers are blindly trusting. If you are auditing a new feature, check if it was generated by an LLM. You will often find that the logic is sound, but the security guardrails are entirely absent.
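As a hypothetical illustration of "hallucinated security," compare the kind of fetch helper an assistant typically emits with the same function after the omitted guardrails are restored (Python standard library; the names and the 1 MiB cap are my assumptions, not from the talk):

```python
import urllib.request
from urllib.parse import urlparse

# What a code assistant typically emits: syntactically correct, security-blind.
def fetch_naive(url):
    return urllib.request.urlopen(url).read()  # no scheme check, no timeout, no size cap

MAX_BYTES = 1 << 20  # cap responses at 1 MiB

def fetch_hardened(url: str, timeout: float = 5.0) -> bytes:
    """The same fetch with the guardrails the assistant omitted."""
    scheme = urlparse(url).scheme
    if scheme not in ("http", "https"):
        # Blocks file://, gopher://, and other surprise handlers.
        raise ValueError(f"refusing non-HTTP scheme: {scheme!r}")
    with urllib.request.urlopen(url, timeout=timeout) as resp:  # bounded wait
        data = resp.read(MAX_BYTES + 1)  # bounded read
        if len(data) > MAX_BYTES:
            raise ValueError("response exceeds size cap")
        return data
```

When auditing generated code, the diff between these two functions is the checklist: scheme restrictions, timeouts, and resource caps are precisely the lines an LLM rarely adds unprompted.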

Moving Toward Outcome-Based Security

Compliance frameworks like SOC 2 are useful business signals, but they are poor proxies for actual security. A company can be fully compliant and still be trivially vulnerable to a simple command injection. We need to stop measuring success by the number of tickets closed or scans completed. Instead, we should measure the "time to exploit" for a given class of vulnerability.

If you can demonstrate that a specific design choice—like using a specific library or a specific API pattern—consistently leads to a high-severity finding, you have the leverage to force a change. This is the difference between being a "bug hunter" and being a "security partner." When you present a finding, don't just show the payload. Show the architectural decision that allowed the payload to work.

What Comes Next

The next time you are on an engagement, look past the immediate vulnerability. If you find a SQL injection, don't just report it and move on. Trace it back to the ORM or the database abstraction layer. Is there a configuration that could have prevented it? Is there a default that should have been disabled?
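The trace-it-back exercise can be shown in a few lines. The sketch below (Python with the built-in `sqlite3` module; table and helper names are mine for illustration) contrasts the injectable string-building pattern with the parameter binding the abstraction layer should make the default:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name: str):
    # String concatenation: the injectable pattern a scanner flags.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameter binding: the payload stays data, never becomes SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()
```

A payload like `' OR '1'='1` returns every row through the first function and nothing through the second. The systemic question for the report is why the codebase made the first form possible at all.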

The goal is to move the needle from "finding bugs" to "fixing the system." Security is not a state you reach by checking boxes; it is a continuous process of refining your architecture to make the secure path the easiest one to take. If you can make it harder for a developer to write insecure code than to write secure code, you have done your job. Start by questioning the defaults, and you will find that the most interesting bugs are the ones that were designed into the system from the start.

Talk Type: panel
Difficulty: intermediate
Category: web security

