JavaScript as Input: A New Attack Surface in Cloud Microservices
This talk explores the security implications of using JavaScript as an input format in cloud-native microservices, specifically focusing on how it can be leveraged for remote code execution. The researchers demonstrate how JavaScript engines like V8, SpiderMonkey, and JavaScriptCore, when embedded in serverless functions, headless browsers, and database plugins, create a significant attack surface. They introduce a methodology for identifying and exploiting these vulnerabilities by manipulating engine configurations and flags. The presentation also proposes using large language models to automate the identification of version-specific vulnerabilities and the generation of test cases for these environments.
When JavaScript Is the Input: Exploiting Engine-Level Vulnerabilities in Cloud Microservices
TLDR: Cloud-native microservices often process user-supplied JavaScript, creating a massive, overlooked attack surface. By manipulating engine configurations and flags in environments like AWS Lambda or database plugins, researchers can achieve remote code execution and sandbox escapes. This research highlights the critical need to treat JavaScript execution environments as untrusted boundaries rather than safe execution sandboxes.
Modern cloud architectures rely heavily on microservices that perform complex data processing, often by executing user-supplied scripts. While we spend significant time auditing API endpoints for injection vulnerabilities, we frequently treat the underlying execution engines—V8, SpiderMonkey, and JavaScriptCore—as immutable, secure black boxes. This assumption is dangerous. When these engines are embedded into serverless functions, headless browsers, or database plugins, they become the primary target for attackers looking to break out of the application layer and into the host environment.
The Mechanics of the Engine-Level Attack
The core issue lies in how these engines handle input. When a microservice accepts JavaScript as an input format, it often parses and executes that code within a specific environment. If the configuration of that environment is flawed, the script can interact with the host system in ways the developer never intended. The research presented at Black Hat 2024 demonstrates that the attack surface is not just the code itself, but the configuration flags passed to the engine during initialization.
Attackers can manipulate these flags to disable security features or enable experimental capabilities that were never meant for production. For example, if a headless browser driven by Puppeteer or Selenium is running with elevated privileges or misconfigured sandbox settings (such as Chromium launched with --no-sandbox), a malicious script can leverage those gaps to perform file system operations or network requests that should be restricted.
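As a concrete illustration of a dangerous flag, V8's natives syntax (enabled by --allow-natives-syntax) exposes internal runtime calls that untrusted input should never be able to reach. A minimal probe, assuming a V8-based host such as Node.js, can detect from inside a script whether that flag was left on:

```javascript
// Probe for V8 natives syntax, which is only parseable when the engine
// was started with --allow-natives-syntax. Illustrative sketch, assuming
// a V8-based runtime (Node.js); other engines will simply report false.
function nativesEnabled() {
  try {
    // %IsSmi is a V8 internal runtime call; without the flag, the '%'
    // prefix is a SyntaxError at compile time and we land in the catch.
    new Function('return %IsSmi(1);')();
    return true;
  } catch (e) {
    return false;
  }
}

console.log(nativesEnabled()
  ? 'natives syntax ON — engine launched with a dangerous flag'
  : 'natives syntax off');
```

A true result here is a strong signal that the deployment is running with non-production engine flags, which is exactly the misconfiguration class the talk describes.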
Consider a scenario where a microservice uses a database plugin to execute custom logic. If that plugin uses an outdated version of a JavaScript engine, it may be vulnerable to known exploits that allow for arbitrary code execution. The researchers showed that by identifying the specific version of the engine, an attacker can craft a payload that triggers a memory corruption vulnerability, leading to a full sandbox escape.
Identifying and Exploiting the Surface
For a pentester or bug bounty hunter, the first step is reconnaissance. You need to determine if the application is executing your input as JavaScript. Look for endpoints that accept script-like structures, such as JSON objects that contain logic, or configuration files that are parsed by the backend. Once you confirm execution, the goal is to probe the environment.
You can test for sandbox restrictions by attempting to access restricted objects or by triggering errors that reveal the engine's version and configuration. A simple payload to test for environment access might look like this:
// Check for access to the global object or file system
try {
  const fs = require('fs');
  console.log(fs.readdirSync('.'));
} catch (e) {
  console.log('File system access restricted');
}
If the environment is misconfigured, you might find that you have access to modules you shouldn't, or that you can execute commands on the underlying container. The researchers emphasized that the most effective attacks often involve a combination of techniques: first, identifying the engine version, then checking for specific configuration flags, and finally, deploying a payload tailored to that environment's weaknesses.
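The version-identification step can often begin from inside the script itself. A minimal fingerprinting sketch, assuming a Node.js host (where process.versions exposes the embedded V8 version; other embeddings will need different probes):

```javascript
// Environment fingerprinting sketch. Assumptions: a Node.js host, where
// process.versions is available; Error.captureStackTrace is a V8-specific
// extension and serves as a rough engine-family tell.
function fingerprint() {
  const info = {};
  if (typeof process !== 'undefined' && process.versions) {
    info.node = process.versions.node; // Node.js release
    info.v8 = process.versions.v8;     // embedded V8 engine version
  }
  // Present on V8, absent on SpiderMonkey and JavaScriptCore.
  info.isV8 = typeof Error.captureStackTrace === 'function';
  return info;
}

console.log(fingerprint());
```

With an exact engine version in hand, the attacker can move to the next step the researchers describe: matching that version against known flaws and tailoring the payload.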
The Role of Automation in Research
Manually identifying version-specific vulnerabilities across dozens of different microservices is a losing battle. The researchers introduced a methodology that uses large language models to automate this process. By feeding the model documentation, commit logs, and issue trackers from projects like Chromium, you can identify which versions of an engine are susceptible to specific classes of attack.
This approach is particularly useful for generating test cases. If you know a specific engine version has a flaw in its JIT compiler, you can ask an LLM to generate a series of JavaScript snippets designed to trigger that flaw. This significantly reduces the time required to move from initial discovery to a working proof-of-concept.
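To make the test-case idea concrete, here is a hedged sketch of the general shape such generated snippets take: warm a function until the JIT optimizes it for one type profile, then abruptly change the argument types to exercise deoptimization paths. This is purely illustrative and does not target any specific CVE:

```javascript
// Generic JIT-stress pattern of the kind an LLM might be asked to emit.
// Illustrative only — it exercises optimization/deoptimization machinery
// but does not trigger any particular vulnerability.
function add(a, b) {
  return a + b;
}

// Monomorphic warm-up: many calls with the same type profile encourage
// the JIT to compile an optimized version specialized for numbers.
for (let i = 0; i < 100000; i++) {
  add(i, 1);
}

// Sudden type change: forces a bailout from the specialized code path.
const result = add('deopt', {});
console.log(typeof result);
```

Real generated test cases would be tailored to a specific engine version and a specific compiler-pipeline flaw; this skeleton only shows the warm-up-then-perturb structure they share.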
Defensive Strategies for Cloud-Native Apps
Defending against these attacks requires a shift in mindset. You cannot rely on the engine's default security settings. First, manage your software components rigorously. If you are using an embedded JavaScript engine, you must treat it as a dependency that requires regular patching. If you are not prepared to track and patch the engine, you should not be executing user-supplied JavaScript.
Second, limit the features available to the JavaScript environment. If your microservice only needs to perform basic data transformation, disable all unnecessary APIs and modules. Use the principle of least privilege for the container running the engine. If the engine does not need network access, block it at the network policy level.
Finally, be wary of experimental flags. Developers often enable these flags to test new features, but they can introduce significant security risks. Audit your deployment configurations to ensure that no unnecessary flags are enabled. If you are using a cloud provider's serverless functions, review their security documentation to understand the limitations of their execution environment and ensure you are not inadvertently weakening those protections.
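A simple startup self-check can catch the flag-audit problem early. This sketch assumes a Node.js host (process.execArgv lists the flags the runtime was launched with); the list of risky flags is illustrative, not exhaustive:

```javascript
// Startup audit sketch: fail loudly if the runtime was launched with
// engine flags that should never appear in production. The flag list
// below is an illustrative sample, not a complete denylist.
const riskyFlags = [
  '--allow-natives-syntax', // exposes V8 internal runtime calls
  '--expose-gc',            // gives scripts direct GC control
  '--harmony',              // enables experimental language features
];

const found = process.execArgv.filter(
  (arg) => riskyFlags.some((flag) => arg.startsWith(flag))
);

if (found.length > 0) {
  console.error('Risky engine flags enabled:', found.join(', '));
} else {
  console.log('No risky engine flags detected in execArgv');
}
```

Running this at service startup (or in a CI check against the deployment config) turns a silent misconfiguration into a visible failure.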
The security of your application is only as strong as the weakest component in your stack. When you allow user input to dictate the execution flow of a JavaScript engine, you are essentially handing the keys to the kingdom to anyone who can craft a clever script. Stop treating these engines as safe sandboxes and start auditing them with the same intensity you apply to your own application code. The next time you see a microservice that accepts JavaScript, don't just look for XSS—look for the engine underneath.