
Safe Harbor or Hostile Waters: Unveiling the Hidden Perils of the TorchScript Engine in PyTorch

DEFCONConference · 34:46

This talk demonstrates how the TorchScript engine in PyTorch can be exploited to achieve Remote Code Execution (RCE) through insecure deserialization of model files. The researchers analyze the `torch.load` function and the `weights_only` parameter, revealing that improper handling of serialized objects allows for arbitrary code execution. The presentation highlights how popular AI frameworks like vLLM and Hugging Face Transformers were vulnerable to this attack vector. The researchers provide a proof-of-concept demonstrating how to bypass security mitigations and achieve RCE.

Remote Code Execution via Insecure Deserialization in PyTorch Model Loading

TLDR: Researchers at DEF CON 2025 demonstrated that PyTorch model files can be weaponized to achieve Remote Code Execution (RCE) by exploiting the torch.load function. Even with the weights_only parameter, attackers can bypass security controls in popular frameworks like vLLM and Hugging Face Transformers. Security teams must treat all untrusted model files as executable code and prioritize updating to patched versions of these libraries.

Machine learning models are the new binaries. For years, the security community has treated model files like static data, assuming that a .bin or .pt file is just a collection of weights and biases. This assumption is fundamentally broken. The research presented at DEF CON 2025 proves that loading an untrusted PyTorch model is effectively the same as executing an arbitrary script provided by an attacker.

The core of the issue lies in how PyTorch handles serialization. Historically, PyTorch relied on Python’s pickle module to save and load models. As any experienced researcher knows, pickle is inherently insecure because it allows for the execution of arbitrary code during the unpickling process. While PyTorch introduced the weights_only parameter to restrict what can be loaded, this talk reveals that the implementation is not the silver bullet many developers assumed it to be.
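The allowlist idea behind weights_only can be illustrated with a restricted unpickler, a pattern taken from the Python pickle documentation rather than PyTorch's actual implementation. Overriding find_class rejects any global that is not explicitly approved, which is why a payload built on os.system fails to load (the class names and allowlist below are illustrative):

```python
import io
import os
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    # Allow only explicitly approved (module, name) globals.
    ALLOWED = {("collections", "OrderedDict")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

class Evil:
    def __reduce__(self):
        # Would execute `id` if loaded with a plain pickle.loads.
        return (os.system, ("id",))

payload = pickle.dumps(Evil())

try:
    RestrictedUnpickler(io.BytesIO(payload)).load()
except pickle.UnpicklingError as e:
    print("rejected:", e)
```

The catch, as the talk shows, is that an allowlist is only as good as its coverage: any approved global that can be chained into dangerous behavior reopens the door.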

The Mechanics of the Exploit

The researchers focused on the torch.load function, which is the standard entry point for loading models. By analyzing the internal logic of the TorchScript engine, they identified that the weights_only parameter, while intended to restrict deserialization to safe types, can be bypassed.

The attack flow is straightforward for anyone familiar with deserialization vulnerabilities. An attacker crafts a malicious model file that, when processed by torch.load, triggers the execution of a payload. The researchers demonstrated this by creating a custom class with a __reduce__ method. When pickle encounters this object, it executes the code defined in the method.

import os

import torch

class EvilModel:
    def __reduce__(self):
        # pickle calls __reduce__ to learn how to reconstruct the object;
        # here it returns a callable plus arguments, which the unpickler
        # invokes during deserialization -- executing `whoami`.
        return (os.system, ('whoami',))

model = EvilModel()
torch.save(model, 'malicious.pt')  # the payload is now embedded in the file

When a victim loads this file using torch.load('malicious.pt'), the system executes the whoami command. The researchers showed that even when weights_only=True is set, certain edge cases in how PyTorch handles complex objects allow the attacker to reach dangerous code paths.
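A useful triage step before loading any suspect file is to enumerate the globals its pickle stream imports, using the stdlib pickletools module. This is a heuristic sketch, not a complete detector (pickle streams can be obfuscated, and legitimate checkpoints do import globals like torch._utils helpers); the idea is to compare what the stream imports against an allowlist without ever executing it:

```python
import os
import pickle
import pickletools

class Evil:
    def __reduce__(self):
        return (os.system, ("whoami",))

payload = pickle.dumps(Evil())

def extract_globals(data: bytes) -> list:
    """List (module, name) pairs imported by a pickle stream, without loading it.

    GLOBAL carries 'module name' as one string argument; STACK_GLOBAL pops its
    two arguments off the stack, so we track preceding string constants.
    """
    found = []
    strings = []  # recent string constants, consumed by STACK_GLOBAL
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
        elif opcode.name == "GLOBAL":
            module, name = arg.split(" ", 1)
            found.append((module, name))
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            found.append((strings[-2], strings[-1]))
    return found

# The module may appear as 'posix' or 'nt' depending on the OS.
print(extract_globals(payload))
```

Anything resembling os.system, subprocess, or builtins.eval in that list is an immediate red flag.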

Real-World Impact on AI Frameworks

This is not a theoretical bug confined to a lab environment. The researchers audited vLLM and Hugging Face Transformers, two of the most widely used libraries in the AI ecosystem. They found that both projects were vulnerable to RCE because they were loading model checkpoints in ways that could be manipulated.

In the case of CVE-2025-24357, the researchers discovered that the hf_model_weights_iterator could be exploited to achieve code execution. The fix, while seemingly simple, highlights the difficulty of securing these frameworks: developers often have to balance performance and compatibility with the inherent risks of Python’s serialization formats.

The researchers also identified CVE-2025-32434, which specifically addresses the RCE vulnerability in PyTorch itself. The patch involves stricter validation of the objects being loaded, but as the researchers pointed out, the history of pickle vulnerabilities suggests that finding new bypasses is often just a matter of time.

Why Pentesters Should Care

During a penetration test, you are likely to encounter AI-driven applications that ingest user-provided models. If you see an application that allows users to upload a model file for fine-tuning or inference, you have a high-probability target.

Do not assume that the application is safe just because it uses a modern framework. Check the version of PyTorch and the associated libraries. If they are outdated, you can likely achieve RCE using the techniques demonstrated in this research. Even if they are patched, look for ways to influence the model loading process. Are there other serialization formats in use? Is there a way to force the application to load a file from a remote source?
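The advisory for CVE-2025-32434 reports PyTorch 2.6.0 as the first patched release (that threshold is an assumption of this sketch; confirm it against the advisory for your engagement). A quick version triage needs nothing PyTorch-specific, just a tuple comparison on the reported version string:

```python
def parse_version(v: str) -> tuple:
    """Parse 'major.minor.patch', ignoring local suffixes like '+cu121'."""
    core = v.split("+", 1)[0]
    return tuple(int(part) for part in core.split(".")[:3])

# CVE-2025-32434 is reported as fixed in PyTorch 2.6.0 (assumed threshold).
PATCHED = (2, 6, 0)

def is_vulnerable(torch_version: str) -> bool:
    return parse_version(torch_version) < PATCHED

print(is_vulnerable("2.5.1+cu121"))  # → True
print(is_vulnerable("2.6.0"))        # → False
```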

Defensive Strategies

Defending against this is difficult because the vulnerability is baked into Python's pickle serialization protocol itself. The most effective strategy is to stop using pickle for model serialization entirely. The industry is moving toward Safetensors, a format designed specifically to be safe and fast by avoiding the execution of arbitrary code.
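To see why that format is safe by construction, consider its on-disk layout: an 8-byte little-endian header length, a JSON header describing each tensor, then raw tensor bytes. Loading reduces to struct and JSON parsing, with no mechanism for invoking callables. A minimal stdlib sketch of the layout (illustrating the published format, not the safetensors library itself):

```python
import json
import struct

# Build a tiny in-memory file in the safetensors layout: 8-byte little-endian
# header length, a JSON header, then the raw tensor bytes it points into.
header = {"weight": {"dtype": "F32", "shape": [2], "data_offsets": [0, 8]}}
header_bytes = json.dumps(header).encode()
blob = struct.pack("<Q", len(header_bytes)) + header_bytes + struct.pack("<2f", 1.0, 2.0)

def read_header(data: bytes) -> dict:
    """Parse only the JSON header -- no pickle, no imports, no code paths."""
    (n,) = struct.unpack_from("<Q", data, 0)
    return json.loads(data[8 : 8 + n])

print(read_header(blob)["weight"]["shape"])  # → [2]
```

Contrast this with pickle, where the file format is itself a small program: here the worst a malformed file can do is fail to parse.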

If you must use PyTorch, ensure you are running the latest version and that you have enabled weights_only=True everywhere possible. However, treat this as a defense-in-depth measure, not a complete solution. If you are building an application that processes models, run the loading process in a strictly isolated sandbox with no network access and minimal filesystem permissions.

The era of trusting model files is over. Every time you load a model, you are running code. Act accordingly.

Talk Type: research presentation
Difficulty: advanced
Includes: demo · code · tool released

DEF CON 33 Main Stage Talks · 98 talks · 2025