HoloConnect AI: From Space to Biohacking
This presentation introduces HoloConnect AI, a holographic communication system designed for high-latency, low-bandwidth environments like space missions. The speaker demonstrates how the system operates offline at the edge, utilizing signed models and on-device logging to maintain integrity without cloud dependencies. The talk outlines a threat model for testing the system, including techniques like camera feed swapping, transport fuzzing, and spoofing tests. The speaker encourages security researchers to audit the system and submit pull requests to improve its security posture.
Breaking Down the Attack Surface of Edge-Based Holographic AI
TLDR: HoloConnect AI introduces a new attack surface by moving complex holographic processing from the cloud to local edge devices. Researchers can target this system by manipulating camera feeds, fuzzing the transport layer, and spoofing authentication tokens. This talk highlights the critical need to audit local AI models and their associated hardware interfaces before they become standard in high-stakes environments.
Holographic communication is no longer a trope from science fiction. It is being deployed in high-latency, mission-critical environments where traditional cloud-based AI latency is a non-starter. Aexa’s HoloConnect AI is designed to operate entirely at the edge, processing 3D presence data locally to maintain functionality when network links to the outside world are severed. While this architecture solves the physics problem of communication delay, it creates a massive, localized attack surface that security researchers are only beginning to map.
The Mechanics of Edge-Based Presence
Traditional AI assistants rely on a round-trip to the cloud for inference. You send a request, the server processes it, and you get a response. HoloConnect AI flips this model. By running inference on local hardware, the system eliminates the dependency on a stable, low-latency connection. This is a massive win for reliability, but it shifts the trust boundary. Instead of securing a remote API endpoint, you are now dealing with a physical device that holds the model, the processing logic, and the sensor data in one place.
The system captures 3D depth data rather than flat video, which is then processed by a local model. This model is signed, and the device maintains a Software Bill of Materials (SBOM) to track dependencies. From an offensive perspective, the goal is to determine how the system handles input validation when the "cloud" is no longer there to act as a filter.
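One concrete way to think about the signed-model claim is a digest check at load time. The sketch below is a minimal illustration, assuming the device pins an expected SHA-256 digest in a signed manifest (the function name, key material, and manifest format are all hypothetical, not HoloConnect internals). An auditor's job is to find the code paths where this check is skipped or its result ignored.

```python
import hashlib

def verify_model_digest(model_bytes: bytes, expected_hex: str) -> bool:
    """Compare a model blob's SHA-256 digest against the value pinned in a
    (hypothetical) signed manifest entry. Returns False on any mismatch."""
    return hashlib.sha256(model_bytes).hexdigest() == expected_hex

# Stand-in model weights and the digest a signed manifest would pin.
good_model = b"model-weights-v1"
pinned_digest = hashlib.sha256(good_model).hexdigest()
tampered_model = b"model-weights-v1-backdoored"
```

A tampered blob fails the comparison, so the interesting bugs are usually upstream: how the pinned digest itself is stored, updated, and authenticated.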
Attacking the Local Pipeline
Testing this system requires a shift in mindset. You are not looking for a standard injection vulnerability in a web form. You are looking for ways to poison the local data stream or bypass the integrity checks on the model itself.
The attack flow starts with the sensor input. If you can intercept the camera feed, you can inject malicious frames. Because the system is designed to "see" and understand context, a carefully crafted input could trigger unexpected behavior in the local inference engine.
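A useful mental model for the camera-feed attack is frame provenance: if the pipeline does not bind each frame to the sensor that produced it, injected frames are indistinguishable from real ones. The sketch below is illustrative only, assuming a per-sensor HMAC key and a tag-per-frame scheme (none of this is known to exist in HoloConnect); a pipeline that omits the `accept_frame` check ingests attacker frames silently.

```python
import hashlib
import hmac

SENSOR_KEY = b"demo-per-sensor-key"  # hypothetical secret shared with the pipeline

def tag_frame(frame: bytes) -> bytes:
    """Authentication tag a trusted sensor would attach to each depth frame."""
    return hmac.new(SENSOR_KEY, frame, hashlib.sha256).digest()

def accept_frame(frame: bytes, tag: bytes) -> bool:
    """Pipeline-side check. Skipping this is exactly the feed-swap weakness."""
    return hmac.compare_digest(tag_frame(frame), tag)

legit_frame = b"\x00" * 16     # stand-in for a real depth frame
injected_frame = b"\xff" * 16  # attacker-supplied frame with no valid tag
```

Even with frame provenance in place, semantically malicious but correctly tagged input (an adversarial scene in front of a real camera) still reaches the model, which is why input validation matters independently.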
Fuzzing the Transport Layer
The transport layer is a prime target for anyone looking to destabilize the device. Since the system must handle jitter and packet loss, the implementation of its communication protocol is likely complex. You can use standard fuzzing techniques to identify crashes or memory corruption issues in the transport stack.
# Example of a basic transport fuzzing approach (tool name and target are illustrative)
# Target the local interface using a custom packet generator
./fuzz_transport --target 192.168.1.50 --port 8888 --protocol holo-stream --iterations 100000
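The illustrative tool above can be approximated in a few lines: a dumb mutation fuzzer that bit-flips a captured packet and replays it. This sketch assumes a UDP transport and an invented seed packet; the real holo-stream wire format is undocumented, so treat every byte and address here as a placeholder.

```python
import random
import socket

def mutate(packet: bytes, n_flips: int, rng: random.Random) -> bytes:
    """Flip a few random bits in a captured packet (basic mutation fuzzing)."""
    buf = bytearray(packet)
    for _ in range(n_flips):
        i = rng.randrange(len(buf))
        buf[i] ^= 1 << rng.randrange(8)
    return bytes(buf)

def fuzz(seed_packet: bytes, host: str, port: int, iterations: int) -> None:
    """Replay mutated packets at the target; a fixed seed makes crashes reproducible."""
    rng = random.Random(0)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for _ in range(iterations):
        sock.sendto(mutate(seed_packet, 3, rng), (host, port))

# Lab usage only, against a device you own (address illustrative):
# fuzz(b"HOLO" + b"\x00" * 28, "192.168.1.50", 8888, 100000)
```

Pair this with a monitor on the device (serial console, watchdog logs) so a crash can be tied back to the exact seeded iteration that triggered it.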
If you can force the device to drop into a fallback state, you might find that the security controls are less stringent than in the primary operating mode. This is a common pattern in embedded systems where "safe mode" is often synonymous with "unauthenticated mode."
Real-World Engagement Scenarios
Imagine you are performing a red team engagement for a facility using these devices for secure access or medical guidance. You would not start by trying to break the encryption. You would start by looking for physical access points or ways to spoof the device's identity on the local network.
If the device uses mutual authentication, your first step is to extract the client certificates. Once you have those, you can attempt to register a rogue device or replay captured traffic to the local controller. The impact here is significant. If you can successfully spoof a presence, you are not just bypassing a login screen; you are effectively "teleporting" into a secure space.
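The value of an extracted client credential is easiest to see against a challenge-response flow. The toy controller below is a sketch under assumed semantics (HMAC over a fresh challenge; class and key names invented, not HoloConnect's actual protocol): replaying an old response fails, but an attacker holding the extracted key can answer any fresh challenge and register a rogue device.

```python
import hashlib
import hmac
import os

class Controller:
    """Toy local controller: accepts a client only if it signs a fresh challenge."""

    def __init__(self, client_key: bytes):
        self.client_key = client_key

    def new_challenge(self) -> bytes:
        return os.urandom(16)

    def verify(self, challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(self.client_key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

extracted_key = b"client-key-pulled-from-device"  # the attacker's prize
ctrl = Controller(extracted_key)
chal1 = ctrl.new_challenge()
resp1 = hmac.new(extracted_key, chal1, hashlib.sha256).digest()
chal2 = ctrl.new_challenge()  # replaying resp1 here fails; re-signing succeeds
```

This is why key extraction, not traffic replay, is usually the decisive step: fresh challenges defeat replay but do nothing once the signing key itself has walked off the device.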
The Defensive Reality
Defending these systems is difficult because they are designed to be autonomous. The primary defense is to treat the device as an untrusted node, even if it is inside your perimeter. Ensure that the device is segmented from the rest of the network and that all traffic is inspected for anomalies.
The use of signed models is a good start, but it does not protect against adversarial inputs that are technically valid but semantically malicious. You must monitor the device's logs for signs of repeated crashes or unusual resource consumption, which are often the first indicators of a fuzzing attempt.
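A defender's starting point for the log-monitoring advice is a crash-loop detector. The sketch below assumes an invented log format (service name first, "restarted" on failure); the real on-device log schema is not public, so only the idea transfers: repeated restarts of one daemon are the classic fingerprint of a fuzzing run.

```python
from collections import Counter

def crash_loop_suspects(log_lines, threshold=3):
    """Flag services that restart repeatedly within the inspected log window.
    Log format is illustrative: '<service> restarted <reason...>'."""
    restarts = Counter()
    for line in log_lines:
        if "restarted" in line:
            restarts[line.split()[0]] += 1
    return [service for service, count in restarts.items() if count >= threshold]

sample_logs = [
    "holo-streamd restarted after signal SIGSEGV",
    "holo-streamd restarted after signal SIGSEGV",
    "holo-streamd restarted after signal SIGSEGV",
    "auth-agent restarted after config reload",
]
```

In practice you would feed this a sliding time window and alert on the rate, not the raw count, so routine maintenance restarts do not trip it.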
What Comes Next
The research into HoloConnect AI is still in its infancy. The developers have opened the door for the community to audit their work, which is a refreshing change from the typical "security through obscurity" approach. If you are looking for your next project, start by cloning the sandbox environment and mapping the attack surface.
Look for ways to manipulate the depth-sensing data. If you can find a way to trick the AI into misidentifying a person or an object, you have found a vulnerability that could have real-world consequences. The shift to edge-based AI is inevitable, but it is up to us to ensure that the security of these systems keeps pace with their capabilities. Don't just look for bugs in the code; look for the flaws in the logic that governs how these devices perceive the world.