The Evolution of Auth, from Users to AI Agents
This talk explores the historical progression of authentication mechanisms, from basic password-based systems to modern passwordless and AI-agent-driven authentication. It analyzes the security and usability trade-offs inherent in identity federation, multi-factor authentication, and API-based service authentication. The presentation provides actionable guidance for organizations and developers to implement secure, scalable authentication, including the use of OAuth scopes and WebAuthn. It highlights the emerging challenges of securing AI agents using protocols like Model Context Protocol (MCP) and Agent-to-Agent (A2A) authentication.
Why Your AI Agent’s OAuth Flow Is the Next Big Attack Vector
TL;DR: AI agents are rapidly gaining the ability to act on behalf of users by requesting delegated access to enterprise resources via OAuth. This shift introduces a massive and often overlooked attack surface, where over-privileged tokens and a lack of granular scope enforcement allow agents to exceed their intended permissions. Security researchers and pentesters must pivot their focus toward auditing these agent-to-resource authorization flows before they become the primary target for lateral movement.
Authentication has spent decades oscillating between two extremes: the friction of human-managed credentials and the convenience of automated, machine-to-machine tokens. We are currently witnessing the next phase of this cycle. As organizations rush to integrate AI agents into their workflows, they are effectively handing over the keys to the kingdom. These agents are not just reading text; they are executing actions, querying databases, and interacting with APIs. When an agent is granted access to a user’s identity, it inherits every permission that user possesses. If that agent is compromised or misconfigured, the blast radius is not limited to a single chat session. It extends to every resource the agent can reach.
The Mechanics of Agent-Based Delegation
Modern AI agents rely on delegation protocols to interact with external systems. The Model Context Protocol (MCP) is currently the most prominent framework for this, allowing models to connect to local and remote data sources. Mechanically, this often relies on OAuth 2.0 flows where the agent acts as a client requesting access to a resource server on behalf of a user.
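To make the delegation step concrete, here is a minimal sketch of how an agent, acting as an OAuth 2.0 client, constructs an authorization-code request to obtain user consent. The endpoint, client ID, and scope names are hypothetical placeholders; the parameter names follow RFC 6749.

```python
from urllib.parse import urlencode

# Hypothetical identity provider endpoint -- substitute your own.
AUTHORIZE_ENDPOINT = "https://idp.example.com/oauth2/authorize"

def build_agent_consent_url(client_id: str, redirect_uri: str,
                            scopes: list[str], state: str) -> str:
    """Build the authorization-code request an agent sends to obtain
    delegated access on behalf of a user."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),   # space-delimited, per RFC 6749
        "state": state,              # CSRF protection for the callback
    }
    return f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}"

url = build_agent_consent_url(
    client_id="mcp-agent-123",
    redirect_uri="https://agent.example.com/callback",
    scopes=["files:read", "calendar:read"],  # request only what the task needs
    state="af0ifjsldkj",
)
print(url)
```

The `scope` parameter in this request is exactly where over-privileging happens: every scope listed here becomes part of the agent's standing authority once the user clicks "Allow."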
The vulnerability here is rarely a flaw in the OAuth protocol itself. Instead, it lies in the implementation of authorization scopes and the lack of user-centric consent management. When a developer builds an agent, they often request broad scopes—like read_all or write_all—to ensure the agent functions without hitting permission errors. This is the classic Broken Access Control scenario, but at the scale of automated agents.
If you are testing an application that integrates AI agents, your primary objective should be to map the agent’s effective permissions. Does the agent have a static API key, or is it using a dynamic OAuth access token? If it is the latter, check the token’s scopes. You will often find that the agent has been granted access to sensitive endpoints that it has no business touching. Because these tokens are often long-lived or lack proper revocation mechanisms, an attacker who gains control of the agent’s execution environment can exfiltrate data or perform unauthorized actions long after the initial session has ended.
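When the access token is a JWT, you can inspect its granted scopes directly by decoding the payload. A minimal sketch (the claim names and the sample token below are illustrative; real tokens must never have their signatures skipped when making trust decisions, only when auditing):

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode the payload segment of a JWT *without* signature verification.
    Useful for auditing scopes during a pentest, not for trusting the token."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a sample (unsigned) token with an over-broad scope claim, for illustration.
def _b64(obj: dict) -> str:
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

sample = f'{_b64({"alg": "none"})}.{_b64({"sub": "agent-42", "scope": "read_all write_all"})}.'

claims = decode_jwt_claims(sample)
print(claims["scope"])  # flag anything broader than the agent's task requires
```

A `scope` claim like `read_all write_all` on a token held by a summarization agent is exactly the over-privileging finding described above.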
Auditing the Agent-to-Agent (A2A) Flow
The recent push toward Agent-to-Agent (A2A) communication introduces even more complexity. In this model, one agent might delegate a task to another, passing along a chain of authorization. This creates a "delegation chain" that is notoriously difficult to audit.
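One standardized way such delegation chains are constructed is OAuth token exchange (RFC 8693), where an "actor" token (agent A) is bound to a "subject" token (the user) to mint a new token for a downstream audience (agent B's resource). A minimal sketch of the form-encoded request body; the token values and audience are hypothetical, while the URN values are defined by the RFC:

```python
from urllib.parse import urlencode

def build_token_exchange_body(subject_token: str, actor_token: str,
                              audience: str, scope: str) -> str:
    """Form-encoded body for an RFC 8693 token exchange: a downstream agent
    acts on behalf of the user (subject) at the request of an upstream
    agent (actor)."""
    return urlencode({
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": subject_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "actor_token": actor_token,
        "actor_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "audience": audience,
        "scope": scope,  # should shrink, never grow, at each hop
    })

body = build_token_exchange_body("user-token", "agent-a-token",
                                 "https://api.example.com", "files:read")
```

When auditing a chain like this, check whether each exchanged token carries an `act` claim identifying the upstream actor, and whether scopes actually narrow at each hop; if they do not, the chain is unauditable.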
When performing a red team engagement, look for how these agents handle secrets. Are they pulling credentials from a secure store like HashiCorp Vault? Or are they relying on environment variables that might be exposed through a simple server-side request forgery (SSRF) or a misconfigured debug endpoint?
Consider this payload scenario: if an agent is configured to interact with a CI/CD pipeline, it likely has access to environment variables. If you can trick the agent into executing a command that prints its own configuration, you might find the OAuth client secret or a high-privilege access token.
```shell
# Example of checking for exposed environment variables in an agent container
curl -X POST http://agent-internal-api/v1/execute \
     -H "Content-Type: application/json" \
     -d '{"command": "env"}'
```
Once you have the token, use it to probe the resource server. If the agent is authorized to access a database, can you use that same token to query tables that the agent shouldn't be able to see? The goal is to prove that the agent’s "identity" is not being constrained by the principle of least privilege.
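Before firing requests at every endpoint, it helps to map the token's effective reach. A minimal offline sketch of that mapping, assuming a hypothetical scope-to-endpoint table recovered from the resource server's documentation or error messages:

```python
# Hypothetical map: endpoint -> scope the resource server requires for it.
REQUIRED_SCOPES = {
    "/v1/files": "files:read",
    "/v1/users/export": "users:admin",
    "/v1/db/query": "db:write",
}

def reachable_endpoints(token_scopes: set[str]) -> list[str]:
    """Return the endpoints a captured token can reach -- in other words,
    the agent's effective blast radius."""
    return [path for path, scope in REQUIRED_SCOPES.items()
            if scope in token_scopes]

# An agent tasked only with summarizing files should reach exactly one
# endpoint; every extra entry in this list is a least-privilege finding.
print(reachable_endpoints({"files:read", "db:write"}))
```

Each endpoint this function returns is then worth a live probe with the captured token to confirm the server actually enforces (or fails to enforce) the mapping.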
The Defensive Reality
Defending against these threats requires moving away from static, broad-scoped permissions. Organizations must implement granular OAuth scopes that limit an agent’s access to the specific resources it needs. If an agent only needs to read objects from one S3 bucket, grant it s3:GetObject on that bucket alone rather than s3:* across the entire account.
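A least-privilege grant of that kind can be audited mechanically. A minimal sketch, using an IAM-style policy document (the bucket name is hypothetical, and the wildcard check is deliberately simplistic: real policies can also hide breadth in conditions and NotAction clauses):

```python
# Hypothetical least-privilege policy for an agent that only reads one bucket.
agent_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::reports-bucket/*"],
    }],
}

def has_wildcard_grants(policy: dict) -> bool:
    """Flag policies granting '*' or 'service:*' actions, or a bare '*'
    resource -- red flags on any agent identity."""
    for stmt in policy["Statement"]:
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            return True
    return False

print(has_wildcard_grants(agent_policy))  # a properly scoped policy passes
```

Running a check like this over every policy attached to an agent identity is a cheap way to surface the broad grants that developers added "just to make it work."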
Furthermore, developers should treat agent tokens with the same level of scrutiny as user credentials. This means implementing short-lived tokens, enforcing strict expiry, and ensuring that every agent action is logged and auditable. If your organization is using Okta or Ping Identity for SSO, ensure that your agent authentication flows are integrated into your centralized identity provider (IdP) rather than relying on standalone, unmanaged secrets.
The industry is currently in a "wild west" phase with AI agents. We have the tools to secure them—OAuth scopes, WebAuthn, and robust IdP integration—but we are failing to apply them consistently. As a researcher, your job is to expose these gaps. Don't just look for the low-hanging fruit of basic authentication failures. Look at the agent’s configuration, audit its delegation chain, and challenge the assumption that the agent is only doing what it was told to do. The next major breach will likely start with an agent that was given too much power and not enough oversight.