The Rise of AI Agents Is an Identity Crisis in Disguise

Marcel Levy

It's a sizzling-hot "AI Summer." 

Companies like OpenAI, Anthropic, and Google launch new models or features at breakneck speed. Time spent with their tools evokes visions of a future of magical robot genies, completing the enormous pile of work we find tedious, overwhelming, and just plain no fun. 

This vision of AI “agents”, or Agentic AI, may not seem to mean much for non-human identity (NHI) systems, aside from the idea that an AI “agent” needs an identity in order to authenticate with other systems. One agent, one identity. Simple, right? 

When you pull on this simple thread, things unravel fast. Visions are one thing – realizing them in the world is something else entirely. As John Salvatier put it, "Reality has a surprising amount of detail." 

Security is full of those details. Let's start with the first one: Authentication. It's an expensive word that answers the question, "Do I know you?" 

For years, we treated usernames and passwords as best practice. When it comes to AI, we still do. For example, to access Claude from your application, you use an Anthropic API key: a long, mostly random string that serves as both your username and your password. If someone else gets hold of it, they can use your account (and your credits) all day long. 
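
Here's what that looks like in practice, using the Anthropic Python SDK (the model name below is illustrative):

```python
import os

import anthropic

# The API key is a single shared secret: whoever holds this string *is* you,
# as far as the API is concerned.
client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # illustrative model name
    max_tokens=256,
    messages=[{"role": "user", "content": "Summarize this ticket backlog."}],
)
print(message.content)
```

There's no notion of which workload made the call, only which key it presented.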

Anthropic is not an outlier: other AI companies do the same thing, because it's what Internet service companies have always done. Shared secrets are still our default.

It's a bit frustrating, since we have more secure, proven approaches that work. Production-ready NHI systems can gate access to AI models without using shared API keys that are difficult to rotate. (I'm biased, but please do check out SPIRL.) They do this by creating true identities with audit trails around their creation, their use, and the permissions attached to them. These identities rest on a foundation that combines the existing underlying platform with strong cryptographic primitives. 
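
As a rough sketch of the difference, consider the flow below. It's illustrative, not a real SPIRL or SPIFFE SDK: `fetch_jwt_svid()` is a hypothetical stand-in for a SPIFFE Workload API client, and the gateway URL is made up. The shape is what matters: the workload proves who it is and receives a short-lived credential, instead of holding a long-lived shared secret.

```python
import requests


def fetch_jwt_svid(audience: str) -> str:
    """Hypothetical helper: asks the local SPIFFE Workload API for a
    short-lived JWT-SVID scoped to `audience`. A real implementation
    would use a SPIFFE client library over the workload API socket."""
    raise NotImplementedError("wire this to your workload API client")


def call_model(prompt: str) -> dict:
    # The credential is minted on demand, expires quickly, and is bound to
    # this workload's attested identity; there is nothing long-lived to
    # leak or rotate.
    token = fetch_jwt_svid(audience="https://llm-gateway.internal.example")
    resp = requests.post(
        "https://llm-gateway.internal.example/v1/chat",  # hypothetical gateway
        headers={"Authorization": f"Bearer {token}"},
        json={"prompt": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```

Every call is now attributable to a specific, attested workload, which is exactly the audit trail described above.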

At first glance, these NHI systems may seem like overkill. That's a lot of infrastructure to protect an API key or two, right? But this brings us to another detail that the Agentic AI vision overlooks: Keeping customers happy and secure.

Companies across the world spend a tremendous amount of time and money (easily billions of dollars) on the basic problem of ensuring that their software does the right thing. This means they also have to know when it does the wrong thing, and understand how to fix it. 

Let’s look at a specific example: synthetic monitoring, the practice of running repeated tests against the real-world production system. These tests simulate customer behavior, like putting a product in a shopping basket. Ideally, developers see the failures before many (or any) of their customers do. 

Depending on the traffic and the response time needed, these tests are run anywhere from once every second or so to once every five minutes. Since those tests are simulating a customer, they also need an identity.
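
A minimal synthetic monitor might look like this sketch. The storefront URL, endpoint, and SKU are all hypothetical:

```python
import time

import requests

CHECK_INTERVAL_SECONDS = 60  # anywhere from ~1 second to 5 minutes
STOREFRONT = "https://shop.example.com"  # hypothetical production endpoint


def add_to_basket_check() -> bool:
    """One synthetic transaction: behave like a customer adding an item."""
    try:
        # The monitor acts as a synthetic "customer," so it needs its own
        # identity and credentials, just like a real one would.
        resp = requests.post(
            f"{STOREFRONT}/api/basket",
            json={"sku": "TEST-SKU-001", "qty": 1},
            timeout=10,
        )
        return resp.status_code == 200
    except requests.RequestException:
        return False


while True:
    if not add_to_basket_check():
        print("ALERT: add-to-basket flow failed")  # ideally, page someone
    time.sleep(CHECK_INTERVAL_SECONDS)
```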

AI agents are probabilistic. Given the same input, they don't necessarily produce the same results. (In this way, they're very much like people.) This makes techniques like synthetic monitoring much harder.

Companies will have to build more sophisticated synthetic monitors – sophisticated enough to check up on AI agents. It’s true that these monitoring agents need not be as powerful as the ones they observe. "Linear regression gets you a long way," as an Amazon principal security engineer once told me. But these monitors will be agents nonetheless, and more agents means more identities. 
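
In that spirit, a monitor for a probabilistic agent can start with cheap, deterministic property checks instead of exact-match assertions. A sketch, with illustrative properties and thresholds:

```python
import re


def check_agent_reply(reply: str) -> list[str]:
    """Assert properties of the reply, not its exact text: a probabilistic
    agent will phrase things differently on every run. Returns a list of
    failure descriptions (empty means the reply passed)."""
    failures = []
    if not re.search(r"\$\d+(\.\d{2})?", reply):
        failures.append("no price quoted")
    if len(reply) > 2000:
        failures.append("reply too long")
    if "traceback" in reply.lower():
        failures.append("error text leaked to the customer")
    return failures


# Exact-match would call these two replies "different"; property checks
# accept both, because both satisfy what we actually care about.
for reply in ("That item costs $19.99.", "It's $19.99, shipped free!"):
    assert check_agent_reply(reply) == []
```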

If a company deploys 500 AI agents, they now need 500 non-human identities. To monitor the quality of those agents in real time, they’ll need another 500 non-human identities. Each of these identities introduces risk, which requires management and security controls. And we haven’t even brought up auditing for compliance…

With great power comes great complexity. LLMs can generate text, code, images, video, and more, but they still can't generate a free lunch.

Since AI agents aren’t going away, neither is the need to secure them. SPIRL makes it easy to give each one a real identity, without falling back on brittle API keys or other shared secrets. If that sounds like a future you’d rather live in, check us out.