Securing Cloud-Native AI Systems and Building Modern SOCs
This panel discussion explores the security challenges of integrating large language models (LLMs) into cloud-native environments and modern security operations centers (SOCs). The speakers analyze risks such as prompt injection, model hallucination, and the complexities of securing AI-driven automated triage systems. The discussion emphasizes the necessity of robust threat modeling, proper authorization controls, and the importance of maintaining human oversight in AI-augmented security workflows.
Beyond the Hype: Why Your AI-Augmented SOC is Leaking Data
TLDR: Integrating LLMs into security operations introduces critical risks, specifically prompt injection and unauthorized data access, which often go unmonitored. Security teams must treat AI models as untrusted software components rather than black-box solutions. Pentesters should prioritize testing the authorization boundaries between LLMs and internal data stores to identify potential data exfiltration paths.
Security teams are rushing to integrate large language models into their workflows, often treating these systems as magical black boxes that solve triage and analysis problems. This is a mistake. The recent panel at Security BSides 2025 highlighted a harsh reality: when you connect an LLM to your internal data, you are essentially giving an unvetted, hallucination-prone intern access to your most sensitive logs and databases. The industry is currently obsessed with the potential of AI, but we are ignoring the massive, gaping holes in the authorization models that govern these systems.
The Reality of AI-Driven Triage
Most modern security operations centers are experimenting with AI to automate incident triage. The workflow usually involves feeding raw logs or alerts into an LLM to summarize the threat or suggest a response. While this sounds efficient, it creates a direct path for T1190-exploit-public-facing-app style attacks to escalate into internal data exfiltration. If an attacker can influence the input—the logs or alerts—they can effectively perform a prompt injection attack against the model.
When the model processes this malicious input, it may inadvertently leak information from the internal data stores it is connected to. The problem is that many of these integrations lack granular OWASP A01:2021-Broken Access Control mechanisms. The LLM often runs with the permissions of the service account that invoked it, which is frequently over-privileged. If the model has read access to a database, it has that access regardless of the user who triggered the query.
Why Your Authorization Model is Failing
The core issue is that LLMs do not understand the concept of "user context" in the way traditional applications do. When you use tools like Claude or GitHub Copilot to assist in security tasks, you are often implicitly trusting the model to handle data correctly. However, the model is just a function. If you provide it with a prompt that includes sensitive data, that data becomes part of the model's context window.
Consider a scenario where a security engineer uses an AI-powered tool to query a PostgreSQL database containing customer PII. If the tool is poorly configured, the model might return more data than the engineer is authorized to see, or worse, it might store that data in its own history or training logs. We are seeing a shift where the "application" is no longer the code you wrote, but the prompt you sent to the model. If your prompt includes a database connection string or an API key, you have already lost.
Testing the AI Boundary
For a pentester, the goal is to map the trust boundary between the LLM and the backend systems. Start by looking for ways to influence the model's input. If you are testing an AI-augmented SOC, can you inject a malicious payload into a log file that the AI will eventually parse? If the AI summarizes that log, does it execute any commands or make external API calls?
You should also test for OWASP A07:2021-Identification and Authentication Failures. Does the AI system verify that the user requesting the summary actually has access to the underlying data? In many cases, the answer is no. The AI acts as a proxy, and if the proxy is not authenticated, the entire system is compromised.
Defensive Strategies for the Modern SOC
Defenders need to stop treating AI as a magic wand. The most effective defense is to implement strict, least-privilege access controls for any service account used by an LLM. If the model does not need access to a specific table, do not give it that access. Furthermore, implement input validation and output filtering. Just as you would sanitize a web form, you must sanitize the prompts sent to your models.
Finally, maintain human oversight. Never allow an LLM to perform automated remediation without a human in the loop. The risk of a model hallucinating a critical command or misinterpreting a threat is too high. If you are building these systems, you are responsible for the security of the data they touch. If you cannot audit the model's decisions, you should not be using it for security operations.
The industry is in a phase of rapid, often reckless, adoption. We are building on top of technologies that we do not fully understand, and we are doing it with the same security mindset we used for static web applications. This will not end well. If you are a researcher, start looking at the authorization layers of these AI integrations. If you are a founder or a lead, ensure your team is not just building for speed, but for security. The next big breach will likely come from an AI system that was given too much trust and not enough oversight.
Vulnerability Classes
Target Technologies
Attack Techniques
Up Next From This Conference
Similar Talks

Counter Deception: Defending Yourself in a World Full of Lies

Exploiting Shadow Data in AI Models and Embeddings




