Kuboid
Open Luck·Kuboid.in
Black Hat2025
Open in YouTube ↗

Agentic AI and Identity: The Biggest Problem We're Not Solving

Black Hat2,315 views47:125 months ago

This talk explores the emerging security risks associated with the integration of agentic AI into enterprise environments, specifically focusing on identity and access management challenges. It highlights how autonomous agents expand the attack surface, creating new vectors for identity spoofing, shadow AI, and unauthorized privilege escalation. The speaker emphasizes the urgent need for robust identity frameworks, threat modeling, and incident response planning to mitigate risks posed by these digital employees. The presentation serves as a strategic call to action for security professionals to develop new taxonomies and controls for managing agentic AI identities.

Why Your Next Privilege Escalation Will Come From an AI Agent

TLDR: Agentic AI is rapidly moving from experimental labs into production environments, creating a massive, unmanaged identity surface. These autonomous agents often operate with high-level permissions, making them prime targets for credential theft and privilege escalation. Security teams must treat AI agents as distinct identities within their IAM architecture to prevent them from becoming the primary vector for lateral movement.

Enterprise security is currently undergoing a shift that most teams are ignoring. We spent the last decade obsessing over human identity, building out complex MFA flows, and tightening conditional access policies. Now, we are handing the keys to the kingdom to autonomous agents. These systems are not just chatbots; they are functional entities that can execute code, access APIs, and make decisions on behalf of users. When an agentic AI is compromised, the attacker does not need to phish a human or bypass a hardware token. They simply need to hijack the agent’s identity.

The New Identity Perimeter

Traditional identity models assume a human is behind the keyboard. We verify that human, grant them a scope, and monitor their behavior. Agentic AI breaks this model because the "user" is a piece of software with the autonomy to perform tasks across multiple systems. When you integrate these agents into your stack, you are essentially creating a new class of privileged service accounts that are far more complex than the static API keys we used to manage.

The risk here is not just that an agent might hallucinate a bad command. The risk is that an attacker can use Broken Access Control techniques to manipulate the agent's decision-making process. If an agent has access to a CI/CD pipeline or a cloud management console, an attacker who gains control over the agent’s execution environment effectively inherits those permissions. This is not a theoretical future problem. As organizations rush to deploy agents for finance, sales support, and IT automation, they are creating a massive, unmonitored attack surface.

Why Agentic AI is a Privilege Escalation Goldmine

During a red team engagement, we look for the path of least resistance. Historically, that meant finding a misconfigured S3 bucket or a hardcoded credential in a repo. In an environment with agentic AI, the path of least resistance is the agent itself. These agents often require broad read/write access to function effectively. If an agent is designed to "automate incident response," it likely has permissions to modify firewall rules, disable accounts, or pull logs.

Consider the flow:

  1. An attacker gains initial access to a low-privilege environment.
  2. They identify an AI agent that has access to a higher-privileged API.
  3. They perform prompt injection or manipulate the agent's context to force it to execute an unauthorized action.
  4. The agent, acting as a trusted entity, performs the action, bypassing standard user-based security controls.

This is essentially T1098-account-manipulation on steroids. Because the agent is an automated system, it lacks the "human intuition" to flag a request as suspicious. It simply follows the logic defined by its instructions and its current context.

Managing the Agentic Identity Lifecycle

Defending against this requires a fundamental change in how we handle IAM. We cannot continue to treat agents as simple scripts. They need their own identity lifecycle. This means implementing Zero Trust principles specifically for non-human entities.

Every agent must have:

  • A unique, verifiable identity.
  • A strictly scoped set of permissions that follows the principle of least privilege.
  • Comprehensive logging that captures not just the action taken, but the context and the "reasoning" behind the agent's decision.

If you are a pentester, start looking at the agents in your target's environment. How are they authenticated? Can you influence their input to perform actions they shouldn't? If you are a defender, look at the NIST AI Risk Management Framework and start mapping your agentic deployments to it. We are currently in the "wild west" phase of AI integration. The organizations that survive this transition will be the ones that stop treating AI agents as magic boxes and start treating them as high-risk, high-privilege identities.

The Path Forward

The industry is already seeing the first wave of legal and regulatory pressure. The New York Department of Financial Services has already issued guidance that explicitly calls out the need for better management of identity and access. This is a clear signal that regulators are catching up to the reality of automated threats.

Do not wait for a breach to force your hand. Start by auditing your current AI integrations. Identify every agent, map its permissions, and determine what happens if that agent is compromised. If you cannot answer those questions, you are already behind. The goal is not to stop innovation, but to ensure that when we build these autonomous systems, we are not building a back door for every attacker on the internet. Focus on the identity, secure the context, and keep your humans in the loop where it matters most.

Talk Type
keynote
Difficulty
intermediate
Has Demo Has Code Tool Released


SecTor 2025

30 talks · 2025
Browse conference →
Premium Security Audit

We break your app before they do.

Professional penetration testing and vulnerability assessments by the Kuboid Secure Layer team. Securing your infrastructure at every layer.

Get in Touch
Official Security Partner
kuboid.in