Marcel Levy
June 19, 2025
On June 5, Asana disabled their experimental Model Context Protocol (MCP) server after discovering a bug that could expose data from one Asana domain to MCP users at other organizations. While the bug was quickly addressed, the incident points to a deeper problem in enterprise AI adoption: We’re giving powerful AI agents tools without guardrails, and without their own identities.
MCP is a standard that lets companies build specialized tools for AI models. Anthropic compares it to USB-C, except that instead of connecting to printers, it allows AI assistants like Claude or ChatGPT to connect to databases, Slack, or, in this case, Asana.
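For concreteness, here is a minimal sketch of an MCP server in Python using the official SDK's FastMCP helper; the server name, tool, and stubbed data are illustrative, not any real integration:

```python
# A minimal MCP server sketch using the mcp Python SDK's FastMCP helper.
# The server name, tool, and stubbed data are illustrative only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tasks")

@mcp.tool()
def list_tasks(project_id: str) -> list[str]:
    """Return task names for a project (stubbed for illustration)."""
    # A real server would call an upstream API such as Asana's here.
    return [f"task-1 in {project_id}", f"task-2 in {project_id}"]

if __name__ == "__main__":
    # Serve over stdio so an MCP client (e.g. Claude Desktop) can connect.
    mcp.run()
```

Once a client connects, the model can invoke `list_tasks` like any other tool, which is exactly why the server's access decisions matter so much.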
Based on the writeup by UpGuard, the bug looks like another appearance of the classic confused deputy problem. Any time an intermediary (in this case the MCP server) sits between a client and a server, we have the possibility of a bug that results in elevated privileges for the client.
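To make the pattern concrete, here's a stripped-down sketch with invented names (UpGuard's writeup doesn't disclose Asana's internals). The deputy holds one privileged upstream credential, and the bug is simply forgetting to scope results to the caller's tenant:

```python
# Confused-deputy sketch with invented names; not Asana's actual code.
# The MCP server (the "deputy") holds one privileged upstream credential.
SERVICE_CREDENTIAL = "server-wide-api-key"  # can read every tenant's data

# Stub of the upstream service: records belong to different tenants.
RECORDS = [
    {"tenant": "org-a", "text": "org A roadmap"},
    {"tenant": "org-b", "text": "org B salaries"},
]

def upstream_search(query, credential, tenant=None):
    # The upstream service trusts the deputy's credential completely.
    assert credential == SERVICE_CREDENTIAL
    hits = [r for r in RECORDS if query.lower() in r["text"].lower()]
    if tenant is not None:
        hits = [r for r in hits if r["tenant"] == tenant]
    return hits

def handle_tool_call(caller_org, query):
    # BUG: the deputy's credential is used, but results are never
    # filtered by caller_org -- org A can read org B's records.
    return upstream_search(query, SERVICE_CREDENTIAL)

def handle_tool_call_fixed(caller_org, query):
    # FIX: scope every upstream request to the caller's own tenant.
    return upstream_search(query, SERVICE_CREDENTIAL, tenant=caller_org)

print(handle_tool_call("org-a", "org"))        # leaks org B's record too
print(handle_tool_call_fixed("org-a", "org"))  # only org A's record
```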
It’s not a problem unique to AI, but the unpredictable power of AI models makes confused deputies more dangerous. That power amplifies other security issues as well. As Pieter Kasselman pointed out, AI agents resemble threat actors, and require guardrails.
In this context, one guardrail would be a security check inserted between the AI model and the tools we provide it. But in order for that to work, we need to use both the identity of the user and the identity of the AI model to limit access to tools. We can’t trust every MCP server to provide the correct fine-grained access by default. Even if by some miracle we could, these servers would need verifiable identities in order to do their job. Increased AI usage requires an increased number of identities.
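Here's one way such a check might look, as a minimal sketch with hypothetical identities, tool names, and policy format: each tool call carries both the user's identity and the agent's identity, and the pair must be authorized before the tool runs.

```python
# Sketch of a guardrail between the model and its tools; the policy
# table, identities, and tool names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Caller:
    user_id: str    # the human on whose behalf the agent acts
    agent_id: str   # the agent's own non-human identity

# Allow-list keyed on (user, agent, tool): the *pair* must be authorized.
POLICY = {
    ("alice@example.com", "spiffe://example.com/agent/reporter", "read_tasks"),
}

def call_tool(caller: Caller, tool: str, run_tool):
    if (caller.user_id, caller.agent_id, tool) not in POLICY:
        raise PermissionError(f"{caller.agent_id} may not run {tool} "
                              f"for {caller.user_id}")
    # Audit before executing, so every agent action is observable.
    print(f"AUDIT user={caller.user_id} agent={caller.agent_id} tool={tool}")
    return run_tool()

caller = Caller("alice@example.com", "spiffe://example.com/agent/reporter")
print(call_tool(caller, "read_tasks", lambda: ["task-1", "task-2"]))
```

The design point is that neither identity alone is sufficient: a powerful agent acting for an unauthorized user is denied, and so is an unauthorized agent acting for a legitimate user.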
Asana is not an outlier. If anything, the company should be commended for its rapid and serious response. Bugs like this will happen again in other MCP servers, and preparation is key to mitigating the harm. As AI becomes more embedded in enterprise workflows, the number of non-human actors will explode, and so will the risk. The first priority is ensuring that your AI workflows and agents have non-human identities you can use to limit their actions and observe their behavior.
Start with three basic steps to manage the security risks of integrating AI into your systems: (1) give every AI agent and workflow its own non-human identity, (2) scope each identity’s access to the minimum set of tools it needs, and (3) monitor and log what those identities actually do.
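As a sketch of what the first step could look like in practice (the field names and in-memory registry are hypothetical; a real deployment would use an identity provider or workload identity system), an agent identity can be registered like any other workload identity, with an accountable owner, explicit scopes, and an expiry that forces review:

```python
# Illustrative registry entry for an agent's non-human identity.
# Field names and the in-memory registry are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    agent_id: str         # stable identifier, e.g. a SPIFFE ID
    owner: str            # human team accountable for the agent
    scopes: list[str]     # tools or actions this agent may use
    expires_at: datetime  # forces periodic review and rotation

registry: dict[str, AgentIdentity] = {}

def register_agent(agent_id: str, owner: str, scopes: list[str]) -> AgentIdentity:
    ident = AgentIdentity(
        agent_id=agent_id,
        owner=owner,
        scopes=scopes,
        expires_at=datetime.now(timezone.utc) + timedelta(days=90),
    )
    registry[agent_id] = ident
    return ident

register_agent("spiffe://example.com/agent/reporter",
               owner="platform-team", scopes=["read_tasks"])
```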
The rise of AI agents means we’re no longer just securing users — we’re securing decisions made at machine speed, in real time, and across organizational boundaries.
Bugs like the one in Asana’s MCP server won’t be the exception — they’ll be the test of whether your architecture is ready.
Now’s the time to build with identity at the core.