AI’s Security Problem Isn’t AI — It’s Everything Around It

Pieter Kasselman

Agentic AI is surfacing hidden risks across digital estates everywhere, not by introducing new vulnerabilities, but by shining a spotlight on architectural debt that has accumulated over decades. Left unchecked, that debt can result in IP leakage, amplified insider threats, and compliance violations.

Why Existing Environments Are So Vulnerable

Despite the progress the industry is making towards least-privilege access, retrofitting a digital estate that has been decades in the making is a time-consuming process. There are often dark corners in that estate that haven't been visited for years, yet they still perform critical functions and have access to sensitive data.

These environments were built with “good enough” security, for a threat model mostly concerned with human actors. Most enterprises trust their employees, and the controls were just enough to keep honest users honest. In this world, human actors often did not know what they could access. If they knew what they could access, they often lacked the skills to do so, and if they had the skills, they were constrained by the ethical and legal consequences of using them. When a human actor outside the enterprise exploited this coarse-grained access, it was called an attack or a breach. If a trusted human actor inside the enterprise did the same, it was called an accident or an insider threat, with consequences for the individuals involved. Either way, exercising all the privileges an employee could obtain in an enterprise was considered an anomaly and treated as such.

AI, and agentic AI in particular, is changing that. AI agents are very good at figuring out what information is available, good at figuring out how to access it, and not constrained by the ethical and legal considerations human actors weigh when deciding whether they should access a resource. In that sense AI agents resemble threat actors more than they resemble trusted employees. It is small consolation that they are not deliberately malicious; the consequences can be equally devastating. For example, a company may deploy an LLM to help employees be more productive and find information more quickly, but because of the coarse-grained access found in many enterprises, it is hard to predict what that LLM can access or how it may use that information. All it takes is one unprotected database or document repository with sensitive HR, financial or customer information, and the LLM will happily answer questions like “Give me a list of all the salaries in my company” (you can fill in your own nightmare question here). AI agents will exercise all the privileges available to them, and they will do so by default, without concern for the consequences. We therefore need to treat any AI agent deployed in an environment as untrusted by default.

A New Set of Guardrails Is Needed

So what should be done if you want to deploy AI securely? The first thing to decide is your strategy for securing the new agentic AI systems being deployed. The second is how you will protect your existing environment against that newly deployed agentic AI and limit its broad access. Both are rooted in identity.

An Identity for Every AI and Every Workload

The first, and easiest, thing to do is to make sure that any new system, including new AI systems, is built with least privilege in mind. Every AI agent, application and workload MUST (yes, that is an IETF RFC 2119 MUST) have its own unique identity and credentials, with fine-grained authorization, monitoring, audit trails and governance. These are table stakes, nothing fancy. The good news is the technology exists and the products are available. This is necessary, but as discussed in this blog post, not nearly sufficient.
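
To make this concrete, here is a minimal sketch (in Python) of an AI agent authenticating as itself using the standard OAuth 2.0 client credentials grant, with a client ID and secret issued to that agent alone. The identity provider endpoint, client ID, scopes and resource URL are hypothetical placeholders; in practice, a managed workload identity (for example SPIFFE/SPIRE or a cloud-issued identity) is preferable to a static secret.

    # Minimal sketch: an AI agent authenticating as itself using the OAuth 2.0
    # client credentials grant (RFC 6749, section 4.4). The endpoint, client ID,
    # scope and resource URL are hypothetical placeholders, not a real service.
    import os
    import requests

    TOKEN_ENDPOINT = "https://idp.example.com/oauth2/token"    # hypothetical identity provider
    CLIENT_ID = "ai-agent-hr-assistant"                        # an identity issued to this agent alone
    CLIENT_SECRET = os.environ["AI_AGENT_CLIENT_SECRET"]       # better still: a managed workload identity

    def get_agent_token(scope: str) -> str:
        """Obtain a short-lived access token scoped to what this agent actually needs."""
        resp = requests.post(
            TOKEN_ENDPOINT,
            data={
                "grant_type": "client_credentials",
                "client_id": CLIENT_ID,
                "client_secret": CLIENT_SECRET,
                "scope": scope,  # least privilege: request only the scope required
            },
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["access_token"]

    # The agent calls downstream services as itself, never with a borrowed human
    # account, so every request can be authorized, monitored and audited against
    # its own identity.
    token = get_agent_token("documents.read")
    requests.get(
        "https://api.example.com/documents",                   # hypothetical protected resource
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )

The point is not the particular protocol; it is that the agent carries its own credentials, so access decisions, monitoring and audit trails can all be tied to that one identity.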

Protect Your Environment Against AI

This brings us to the hardest part of deploying AI securely, namely securing the existing environment. 

Ideally, existing environments can adopt a least-privilege or “zero trust” architecture that starts with an identity for every application or workload, fine-grained authorization, and governance. Many companies already have programs underway to do this, but it is by no means universal. Even where programs are in place, they will take time to deploy. Once in place, they will protect not only against agentic AI but also against more traditional threats: external threat actors, accidents and malicious insiders. However, this is not practical for everyone.

Another option is to deploy AI gateways that act as points of control. These gateways serve as policy enforcement points, controlling which resources an AI agent can access based on its identity. They inspect inbound and outbound traffic to filter content and detect anomalous access patterns, and they log all access to create the audit trails used in the governance process. Such gateway proxies can even be deployed to create segmentation not just between agentic AI and brownfield systems, but between brownfield systems themselves, to harden them. In some ways, this looks very much like the firewalls and API gateways that many security practitioners are already familiar with.
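
To make the gateway idea more concrete, the sketch below shows the core decision logic such a policy enforcement point might implement: identify the calling agent, check the requested resource against an allow-list policy, and write an audit record either way. The identities, resources and policy shape are illustrative assumptions, not a reference to any particular product; a real gateway would also sit inline as a proxy and inspect payloads.

    # Minimal sketch of the decision logic inside an AI gateway acting as a policy
    # enforcement point. Identities, resources and policy shape are illustrative
    # assumptions; a real gateway sits inline as a proxy and also inspects payloads.
    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("ai-gateway.audit")

    # Allow-list policy: which agent identity may access which resources.
    POLICY = {
        "spiffe://example.org/agent/support-bot": {"kb.articles.read"},
        "spiffe://example.org/agent/finance-bot": {"ledger.read", "ledger.summarize"},
    }

    def authorize(agent_id: str, resource: str) -> bool:
        """Decide whether the agent may access the resource, and log the decision."""
        allowed = resource in POLICY.get(agent_id, set())
        audit_log.info(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": agent_id,
            "resource": resource,
            "decision": "allow" if allowed else "deny",
        }))
        return allowed

    # An over-reaching request is denied by default and still leaves an audit trail.
    authorize("spiffe://example.org/agent/support-bot", "hr.salaries.read")  # -> False

Deny-by-default matters here: an agent with no policy entry, or a request outside its entry, gets nothing, and the attempt is still recorded for governance.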

In practice, both infrastructure modernization and AI gateways must proceed in parallel. Programs to modernize existing infrastructure take longer, but their long-term benefits accumulate broadly, while using AI gateways to isolate AI agents from the rest of the environment speeds up deployment and prevents AI projects from being gated behind large, multi-year security programs. Regardless of the approach taken, the fundamentals remain the same:

  1. Start with Identity: Every AI agent, application and workload must have a unique identity and credentials.
  2. Limit Access: Deploy policy-based authorization as part of a least-privilege access strategy, whether enforced by an AI gateway or throughout your environment.
  3. Govern: Deploy tools to monitor access and detect deviations from policy – and investigate those anomalies when they occur (a minimal sketch follows this list).
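
As an illustration of the third point, the sketch below scans gateway audit records (like the ones produced in the earlier sketch) for simple deviations from policy: access to resources an agent has not been observed touching before, and repeated denials. The record shape, baseline and threshold are assumptions made for the sake of the example; in production these events would typically feed into existing SIEM and governance tooling.

    # Minimal sketch: scan AI-gateway audit records for simple deviations from policy.
    # The record shape, baseline and threshold are assumptions made for illustration.
    from collections import Counter

    DENY_THRESHOLD = 5  # repeated denials can indicate an agent probing beyond its policy

    def find_anomalies(records: list[dict], baseline: dict[str, set[str]]) -> list[str]:
        """Return findings worth investigating: first-time access and repeated denials."""
        findings = []
        denials = Counter()
        seen = {agent: set(resources) for agent, resources in baseline.items()}
        for rec in records:
            agent, resource = rec["agent"], rec["resource"]
            if resource not in seen.setdefault(agent, set()):
                findings.append(f"{agent}: first observed access to {resource}")
                seen[agent].add(resource)
            if rec["decision"] == "deny":
                denials[agent] += 1
        for agent, count in denials.items():
            if count >= DENY_THRESHOLD:
                findings.append(f"{agent}: denied {count} times; review its policy and behavior")
        return findings

    # Example: feed in the JSON records written by the gateway's audit log.
    sample = [{"agent": "support-bot", "resource": "hr.salaries.read", "decision": "deny"}] * 5
    print(find_anomalies(sample, baseline={"support-bot": {"kb.articles.read"}}))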

AI agents are not the problem; they are just showing us the extent of the problem.

Existing environments have accumulated technical debt in the form of coarse-grained access over decades. Now AI agents are showing us just how over-permissioned and under-governed these environments are. Where possible, existing environments should be hardened, and new environments should be built with least privilege in mind. This takes time, and while these programs roll out, deploying AI gateways, similar to API gateways and firewalls, helps isolate AI agents from existing environments with coarse-grained access. Regardless of your approach, (non-human) identity is at the core of the solution.