Mind vs Machine: The Role of Human Psychology and AI in Security Culture
This talk explores the intersection of human psychology and AI in security, highlighting how human cognitive biases and AI-driven automation introduce new attack vectors. It examines how threat actors, both human and AI-based, exploit human tendencies like politeness, urgency, and cognitive overload to bypass security controls. The presentation provides strategic recommendations for security architects to design systems that account for these behaviors, such as implementing just-in-time access, automated guardrails, and adversarial training. It emphasizes that security must be a deliberate, conscious design choice rather than an accidental byproduct.
Why Your AI-Driven Security Controls Are Failing the Human Test
TLDR: Security controls often fail not because of technical vulnerabilities, but because they ignore human psychology and the unpredictable nature of AI agents. Threat actors are increasingly exploiting human politeness, urgency, and cognitive overload to bypass traditional defenses. To secure modern environments, architects must move beyond static policies and implement context-aware guardrails, adversarial training, and automated verification loops.
Security teams spend thousands of hours hardening infrastructure against external threats, yet we consistently leave the front door unlocked by ignoring the human element. The recent rise of generative AI has only accelerated this problem, creating a landscape where attackers use both human social engineering and automated AI agents to exploit our systems. This isn't just about phishing emails anymore. It is about understanding that security is a design choice, not a static configuration.
The Psychology of the Bypass
Human beings are the most complex component in any security architecture. We are hardwired for social cooperation, which makes us inherently vulnerable to manipulation. When a security control—like a multi-factor authentication prompt or a strict access request workflow—creates friction, the human instinct is to find the path of least resistance.
Attackers know this. They don't need to find a zero-day exploit if they can convince a developer to disable a security header or share a session token in a moment of urgency. This behavior is not driven by malice; it is driven by the need to get work done. If your security controls are slow, opaque, or overly complex, your users will find a way around them.
As a pentester, you have likely seen this in every engagement. You don't need to break the encryption if you can convince an admin to "temporarily" grant you elevated privileges to debug a production issue. The vulnerability here is the human need for autonomy and the desire to avoid bureaucratic red tape.
The Rise of Non-Human Threat Actors
We are currently witnessing an explosive growth of non-human identities in our environments. These are the scripts, cron jobs, and AI agents that handle our automation. According to recent research, these non-human identities now vastly outnumber human users in many enterprise environments.
The problem is that we treat these identities like static service accounts. We assign them broad permissions, hardcode API keys, and rarely rotate credentials. When an attacker compromises one of these identities, they gain a foothold that is often invisible to traditional monitoring tools.
Consider the OWASP Identification and Authentication Failures category. While we focus on human password hygiene, we are failing to apply the same rigor to our machine-to-machine communication. If your CI/CD pipeline uses a long-lived token to push code to production, you have already lost.
AI as an Adversary and an Agent
Generative AI has changed the game for both sides. On the offensive side, we are seeing adversarial machine learning techniques being used to bypass security filters. Attackers are using tools like Stockfish to model decision-making processes and find loopholes in logic that a human would never spot.
More concerning is the ability of AI to hallucinate facts and generate convincing, yet entirely false, information. We have already seen legal professionals sanctioned for submitting court filings generated by AI that included fake case citations. In a security context, this means an AI agent could provide a developer with a "secure" code snippet that actually contains a backdoor or a vulnerable dependency.
If you are building AI-driven security tools, you must implement parallel decision-making systems. Never trust the output of a single model. Use a secondary, independent system to verify the output of the first. If the two systems disagree, the action should be blocked and flagged for human review.
Designing for the Human and the Machine
Defending against these threats requires a shift in mindset. We need to stop treating security as a wall and start treating it as a path.
For human users, this means implementing just-in-time access models that remove the need for standing privileges. When a user needs access, they should be able to request it through a streamlined, automated workflow that provides the necessary context and logs the activity. By reducing the friction, you reduce the incentive for users to bypass the system.
For non-human identities, we must enforce the principle of least privilege. Every script and agent should have a scoped role that is rotated frequently. Use tools like HashiCorp Vault to manage secrets dynamically rather than relying on static environment variables.
Finally, we must embrace adversarial training. You should be actively testing your systems by feeding them malicious, confusing, and contradictory inputs. If your AI agent can be tricked into revealing sensitive data or performing unauthorized actions, you need to know about it before an attacker does.
Security is not a product you buy; it is a culture you build. It requires a deep understanding of how humans and machines interact, and a willingness to design systems that account for the reality of human behavior. Stop trying to build a perfect, impenetrable box. Start building systems that are resilient, observable, and designed to fail safely.
Vulnerability Classes
Target Technologies
OWASP Categories
Up Next From This Conference
Similar Talks

Unmasking the Snitch Puck: The Creepy IoT Surveillance Tech in the School Bathroom

Counter Deception: Defending Yourself in a World Full of Lies




