Heather Howland
A curious employee typed a question into the company’s new AI assistant:
“Can you give me the salaries of everyone in the company?”
To their surprise, it answered.
Turns out the agent had access to the company’s HR system — the result of overly broad permissions that also exposed data to users who shouldn’t have had it in the first place.
This wasn’t a bug. It was a blind spot. A byproduct of rolling out agentic AI without the identity guardrails we’d never skip for a human or service account. And while this time it was salary data, next time it could be source code, product roadmaps, financial records, or proprietary models — the kind of intellectual property that fuels your business.
Because here’s the thing about IP: once it’s out, it doesn’t come back. It doesn’t just leak; it spreads. Competitors, foreign actors, or even your own employees can retain access indefinitely. Source code, models, customer lists: these aren’t just assets. They’re advantage.
This wasn’t just a chatbot with a clever interface. It was an agentic system — able to interpret intent, take action, and pull data from across the enterprise. And like many AI-powered tools entering the enterprise today, it wasn’t treated like an identity. It was treated like infrastructure.
That’s the danger.
Agentic AI systems aren’t passive tools. They can make decisions, initiate requests, and act on behalf of users. But without identity-based security (authentication, access control, auditing, and policy enforcement), they operate with unchecked power and invisible reach, free to trigger actions and touch sensitive data without oversight.
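What does that look like in practice? Here’s a minimal sketch of a deny-by-default gate in front of an agent’s tool calls. The names (AgentIdentity, authorize_tool_call) and the scope strings are illustrative assumptions, not any particular product’s API:

```python
from datetime import datetime, timezone

class AgentIdentity:
    """A hypothetical identity record for one AI agent."""
    def __init__(self, agent_id: str, scopes: set[str]):
        self.agent_id = agent_id   # who or what the agent is
        self.scopes = scopes       # what it is allowed to touch

AUDIT_LOG: list[dict] = []

def authorize_tool_call(identity: AgentIdentity | None, tool: str) -> bool:
    """Deny-by-default gate: authenticate, check scope, and audit every call."""
    allowed = identity is not None and tool in identity.scopes
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": identity.agent_id if identity else "unauthenticated",
        "tool": tool,
        "decision": "allow" if allowed else "deny",
    })
    return allowed

hr_assistant = AgentIdentity("hr-assistant", scopes={"hr.read_own_profile"})

# Even this toy gate would have refused the salary query from the opening story:
assert not authorize_tool_call(hr_assistant, "hr.read_all_salaries")
```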
And because these systems act autonomously, the consequences can be fast — and invisible — until someone asks the wrong question.
Agentic AI isn’t just processing information — it’s acting on it in ways that are hard to predict. That makes it fundamentally different from traditional software. It behaves more like a user or service account with autonomy. This unpredictable behavior in pursuit of a goal means it should be treated as an untrusted or compromised actor, with clear constraints on what it can access.
That’s why agentic AI must be treated as a first-class identity — because the only way to limit what it can access is to define who or what it is. Because if it can act, it can cause impact — good or bad.
Agentic AI is quickly becoming one of the most powerful — and riskiest — non-human actors in enterprise environments. CISOs, security architects, and platform leaders can’t afford to treat these systems like traditional apps or plug-ins.
Just like you manage user identities or non-human identities, agentic systems need:

- A verified, authenticated identity
- Scoped, least-privilege access
- Auditing and traceability for every action
- Real-time policy enforcement
If your AI can query a database, trigger an API call, or generate an email, it’s no longer just a tool. It’s an identity — and it needs to be governed like one.
1. Inventory every agentic AI system in your environment
Identify where agentic AI is deployed — from internal assistants to embedded agents in workflows — and understand what systems they can access.
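Even a simple structured registry beats a spreadsheet no one updates. A sketch, with field names that are assumptions rather than any standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """One row in a hypothetical agent inventory."""
    name: str
    owner: str                                       # accountable team or person
    deployment: str                                  # where it runs
    reachable_systems: list[str] = field(default_factory=list)

inventory = [
    AgentRecord("hr-assistant", owner="people-ops", deployment="internal chat",
                reachable_systems=["hr-db", "email"]),
    AgentRecord("code-review-bot", owner="platform", deployment="ci pipeline",
                reachable_systems=["git", "issue-tracker"]),
]

# A quick blast-radius question: which agents can reach the HR system?
print([a.name for a in inventory if "hr-db" in a.reachable_systems])
```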
2. Assign identity and scoped access
Treat each AI agent like a privileged user or service account. Define its identity, limit its access to only what’s necessary, and enforce policy by purpose and context.
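One way to express that, sketched below as a hypothetical policy table keyed by agent, resource, and purpose (all names are illustrative):

```python
# Each agent identity maps to the (resource, purpose) pairs it may use.
POLICY: dict[str, set[tuple[str, str]]] = {
    "hr-assistant": {
        ("hr-db", "answer_employee_own_questions"),  # own-record lookups only
    },
    "code-review-bot": {
        ("git", "review_pull_requests"),
    },
}

def is_permitted(agent: str, resource: str, purpose: str) -> bool:
    """Least privilege: unknown agents, resources, or purposes are denied."""
    return (resource, purpose) in POLICY.get(agent, set())

assert is_permitted("hr-assistant", "hr-db", "answer_employee_own_questions")
assert not is_permitted("hr-assistant", "hr-db", "bulk_salary_export")
```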
3. Monitor, audit, and enforce in real time
Ensure every action taken by an AI agent is logged, traceable, and governed. Real-time enforcement and observability are critical — especially when agents can act autonomously.
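A rough sketch of what that could look like: structured audit events plus a simple real-time response. The three-strike suspension threshold is an arbitrary illustrative policy, not a recommendation:

```python
import json
import logging
import sys
from collections import Counter

logging.basicConfig(stream=sys.stdout, level=logging.INFO, format="%(message)s")
audit = logging.getLogger("agent.audit")

denials: Counter = Counter()
revoked: set[str] = set()

def record_and_enforce(agent: str, action: str, resource: str, allowed: bool) -> None:
    """Log every agent action; suspend an agent that keeps probing denied resources."""
    decision = "deny" if (not allowed or agent in revoked) else "allow"
    audit.info(json.dumps({"agent": agent, "action": action,
                           "resource": resource, "decision": decision}))
    if decision == "deny":
        denials[agent] += 1
        if denials[agent] >= 3:
            revoked.add(agent)  # real-time response: pull access, alert a human

record_and_enforce("hr-assistant", "query", "hr-db", allowed=False)
```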
Securing agentic AI, and securing against agentic AI, will require a concerted effort: novel approaches and architectures as well as user education and awareness. Start investing now.
The salary leak didn’t happen because the AI was malicious. It happened because no one defined its identity, scope, or privileges. That’s the core risk with agentic AI: it doesn’t know what it shouldn’t do. It only knows what it can.
Security teams have spent years building strong controls around human and machine identities. Agentic AI is the next evolution — and it demands the same level of scrutiny and protection.
If AI is going to act, it must be governed.
If AI is going to access, it must be verified.
If AI is going to help us move faster — it cannot operate unconstrained.
Because once IP leaks, there’s no getting it back.
As regulators begin to issue AI guidance and audit expectations, enterprises will need to demonstrate control over AI systems just like any other system. That starts with identity, access, and visibility.
At SPIRL, we believe agentic AI deserves the same identity protections as any privileged actor. Because in the age of autonomous systems, access without identity is a risk we can’t afford to ignore.
Explore how we help teams embed identity and access controls into agentic systems from day one, so you can innovate with confidence, not cleanup.