How Computers Kill People: Maritime Systems
This talk demonstrates how software-defined control systems in maritime environments are susceptible to catastrophic failure through the manipulation of setpoints. It highlights how the integration of modern, networked IT components into legacy critical infrastructure introduces new attack vectors and risks. The presentation emphasizes that poor software quality and lack of robust engineering practices in these systems can lead to real-world physical damage, such as ship blackouts and loss of propulsion. A simulation is used to illustrate how a simple setpoint manipulation can trigger a cascading failure across multiple critical engine systems.
Why Your Next Industrial Control System Audit Needs to Include Setpoint Manipulation
TLDR: Modern maritime vessels are increasingly reliant on networked, software-defined control systems that lack basic fail-safe mechanisms. By manipulating setpoints within these systems, an attacker can trigger cascading physical failures, such as engine blackouts, without ever needing to exploit traditional software vulnerabilities. Security researchers and auditors must shift their focus from perimeter defense to the integrity of the control logic itself to prevent catastrophic physical outcomes.
Maritime infrastructure is undergoing a massive shift. We are moving away from mechanical, air-gapped systems toward highly integrated, networked environments where IT and OT converge. While this digitization promises efficiency, it also introduces a large, often overlooked attack surface. The recent research presented at DEF CON 2025 on maritime systems highlights a critical reality: when code controls the physical world, a logic error is just as dangerous as a remote code execution exploit.
The Danger of Logic Over Vulnerability
Most security professionals are conditioned to hunt for memory corruption, buffer overflows, or authentication bypasses. In the context of industrial control systems, these are certainly risks, but they are not the only ones. The real danger often lies in the intended functionality of the system. If a system is designed to allow a user to change a temperature setpoint, and that system does not validate the safety of that input, the system will execute the command regardless of the physical consequences.
This is not a bug in the traditional sense. It is a failure of engineering practice. When you replace a mechanical thermostat with a programmable logic controller (PLC) that has a web interface, you are not just adding a feature. You are adding a network-accessible control point for a physical process. If that interface is poorly secured or if the underlying logic assumes that all inputs are safe, you have created a path for an attacker to manipulate the physical state of the vessel.
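The failure mode described above can be sketched in a few lines. Everything here is hypothetical (the `CoolingController` class, the handler name, the nominal temperature); it simply illustrates a controller that executes whatever setpoint it receives, with no notion of physical safety:

```python
# Hypothetical sketch of a PLC-style setpoint handler with no safety
# validation. All names and values are illustrative, not from any
# real product or the system discussed in the talk.

class CoolingController:
    """Models a controller that trusts every commanded setpoint."""

    def __init__(self, setpoint_c: float = 75.0):
        self.setpoint_c = setpoint_c

    def handle_write(self, value: float) -> str:
        # No range check, no rate limit, no authorization:
        # the controller simply executes whatever it receives.
        self.setpoint_c = value
        return f"setpoint accepted: {value} C"

ctrl = CoolingController()
print(ctrl.handle_write(700.0))  # a physically absurd value is accepted
```

Nothing in this code is "vulnerable" in the CVE sense. It does exactly what it was designed to do, and that is the problem.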
Cascading Failures in Engine Management
The simulation demonstrated during the talk provides a clear example of how this plays out. The researchers used a simulator for a RoPax vessel, which carries both cargo and passengers. The ship’s power management system controls four diesel generators. These generators are cooled by a complex, multi-stage system involving seawater and freshwater heat exchangers.
The critical vulnerability here is the lack of input validation on the setpoints for the cooling system. An attacker with access to the control network can modify the temperature setpoint for the heat exchanger valve. By setting this value to an extreme, the attacker forces the valve into a state that prevents proper cooling. Because the software is designed to prioritize the setpoint over physical safety, the engine management system does not intervene until it is too late.
The result is a cascading failure. As the engine overheats, the emergency shutdown system triggers. Because the four engines are managed by a single, centralized automation system, the shutdown propagates across all four generators. Within 90 seconds, the ship experiences a total blackout and loss of propulsion. This is not a theoretical scenario; it is a direct consequence of trusting software to manage critical physical processes without robust, hardware-level fail-safes.
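The cascade can be modeled with a toy simulation. The thresholds, heating rate, and time steps below are assumptions chosen for illustration, not values taken from the RoPax simulator; what matters is the structure: one bad setpoint, one shared trip path, four tripped generators.

```python
# Toy model of the cascading failure described above, assuming a single
# centralized automation system governs all four diesel generators.
# All thresholds and rates are illustrative.

OVERTEMP_TRIP_C = 110.0  # assumed emergency-shutdown threshold

def simulate(setpoint_c: float, steps: int = 90) -> dict:
    temp = 85.0  # assumed nominal jacket-water temperature
    generators = {f"DG{i}": "RUNNING" for i in range(1, 5)}
    for t in range(steps):
        # A far-too-high setpoint keeps the cooling valve positioned so
        # that the engine accumulates heat instead of rejecting it.
        if setpoint_c > 100.0:
            temp += 0.5
        if temp >= OVERTEMP_TRIP_C:
            # The centralized shutdown propagates to every generator.
            for g in generators:
                generators[g] = "TRIPPED"
            return {"blackout_at_s": t, "generators": generators}
    return {"blackout_at_s": None, "generators": generators}

print(simulate(setpoint_c=700.0))  # all four generators trip; blackout
print(simulate(setpoint_c=75.0))   # normal setpoint: no trip
```

Note that the attacker never touches the generators directly; the shared automation layer does the damage on their behalf.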
Testing for Setpoint Manipulation
For a pentester or bug bounty hunter, the engagement model for these systems needs to change. You are not just looking for an entry point; you are looking for the control logic. During an assessment, focus on the following:
- Identify the Control Interfaces: Map out all web interfaces, HMI panels, and network-accessible management consoles.
- Analyze the Logic: Look for parameters that control physical processes. Can you modify these values? What happens when you input values outside of the normal operating range?
- Test for Fail-safes: Does the system have any mechanism to reject dangerous commands? If you set a temperature to 700 degrees, does the system flag it, or does it attempt to reach that temperature?
- Understand the Dependencies: How do different systems interact? A failure in a cooling system should not necessarily lead to a total engine shutdown, but if the systems are tightly coupled, it often does.
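The fail-safe test in the list above can be made concrete. Below is a sketch of the plausibility check an auditor hopes to find in front of every physical setpoint; the safe band is an assumed value for a cooling-water loop, not a published specification:

```python
# Sketch of the fail-safe an auditor should look for: reject setpoints
# outside the physically plausible envelope. The range is illustrative.

PLAUSIBLE_RANGE_C = (60.0, 95.0)  # assumed safe cooling-water band

def validate_setpoint(value: float) -> tuple[bool, str]:
    lo, hi = PLAUSIBLE_RANGE_C
    if not (lo <= value <= hi):
        return False, f"rejected: {value} C outside {lo}-{hi} C"
    return True, f"accepted: {value} C"

print(validate_setpoint(85.0))   # within the envelope
print(validate_setpoint(700.0))  # the dangerous write should be refused
```

If a system under test accepts the second value without complaint, that finding belongs in the report with the same severity you would assign a remote code execution bug.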
If you are working in this space, familiarize yourself with the IEC 62443 series of standards. These provide a framework for secure industrial automation and control systems. While they are not a silver bullet, they are the baseline for understanding how these systems should be designed and audited.
The Defensive Imperative
Defenders must stop treating OT environments like standard IT networks. The priority in an OT environment is availability and safety, not just confidentiality. Implementing Zero Trust principles is a start, but it is insufficient if the control logic itself is flawed.
The most effective defense is to implement hardware-level interlocks that are independent of the software. If the software fails, the hardware should be capable of maintaining the system in a safe state. Furthermore, all setpoint changes should be logged, audited, and, where possible, require multi-factor authorization.
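The logging and multi-party authorization idea can be sketched as a thin wrapper around every setpoint change. The names, the two-approver policy, and the logger configuration are all hypothetical; the point is that a dangerous write requires more than one credential and always leaves a trace:

```python
# Illustrative sketch of logged, dual-authorized setpoint changes.
# The two-approver policy and all names are assumptions for this example.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("setpoint-audit")

def change_setpoint(current: float, new: float, approvals: set[str]) -> float:
    # Require two distinct approvers before applying the change.
    if len(approvals) < 2:
        log.warning("setpoint change %s -> %s denied: %d approval(s)",
                    current, new, len(approvals))
        return current
    log.info("setpoint change %s -> %s approved by %s",
             current, new, sorted(approvals))
    return new

# A single operator cannot push the change through alone:
change_setpoint(75.0, 700.0, {"operator1"})
```

Hardware interlocks remain the last line of defense; this wrapper only raises the cost of the attack and guarantees an audit trail.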
We are at a point where the distinction between a software bug and a physical accident is disappearing. As we continue to integrate more complex, networked systems into our critical infrastructure, the responsibility falls on us to ensure that these systems are not just efficient, but safe. The next time you are auditing an industrial system, look past the CVEs and ask yourself: what happens if I change this value? The answer might be more significant than you think.