June 19, 2025
Asana’s MCP Bug Wasn’t Unique — It Was a Sign of What’s Coming


On June 5, Asana disabled their experimental Model Context Protocol (MCP) server after discovering a bug with the potential to expose data from an Asana domain to MCP users at other organizations. While the bug was quickly addressed, the incident points to a deeper problem in enterprise AI adoption: We’re giving powerful AI agents tools without guardrails, and without their own identities.
MCP is a standard that lets companies build specialized tools for AI models. Anthropic compares it to a USB-C port, except that instead of connecting peripherals, it allows AI models like Claude or ChatGPT to connect to databases, Slack, or in this case, Asana.
How AI can turn a bug into a breach
Based on the writeup by UpGuard, the bug looks like another instance of the classic confused deputy problem. Any time an intermediary (in this case the MCP server) sits between a client and a server, there is the possibility of a bug that results in elevated privileges for the client.
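To make the pattern concrete, here is a minimal sketch of a confused-deputy bug in an MCP-style intermediary. All names (TOOL_DB, handle_request_buggy, the org IDs) are illustrative assumptions, not Asana’s actual code: the point is that the server holds one broad credential and acts on whatever the client names, rather than checking the caller’s identity.

```python
# Hypothetical sketch of a confused-deputy bug in an MCP-style intermediary.
# The server holds one privileged credential covering every tenant's data.
TOOL_DB = {
    "org-a": {"tasks": ["Ship Q3 roadmap"]},
    "org-b": {"tasks": ["Acquire competitor"]},
}

def handle_request_buggy(requested_org: str) -> list[str]:
    # Bug: the server acts on whatever org the client names,
    # using its own broad access rather than the caller's identity.
    return TOOL_DB[requested_org]["tasks"]

def handle_request_fixed(caller_org: str, requested_org: str) -> list[str]:
    # Fix: check the caller's identity against the resource before acting.
    if caller_org != requested_org:
        raise PermissionError("caller is not authorized for this org")
    return TOOL_DB[requested_org]["tasks"]
```

In the buggy version, a client from one organization can read another organization’s data simply by naming it, because the intermediary’s privileges stand in for the caller’s.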
It’s not a problem unique to AI, but the unpredictable power of AI models makes confused deputies more dangerous. This power makes other security issues more dangerous as well: as Pieter Kasselman pointed out, AI agents resemble threat actors, and require guardrails.
In this context, one guardrail would be a security check inserted between the AI model and the tools we provide it. But in order for that to work, we need to use both the identity of the user and the identity of the AI model to limit access to tools. We can’t trust every MCP server to provide the correct fine-grained access by default. Even if by some miracle we could, these servers would need verifiable identities in order to do their job. Increased AI usage requires an increased number of identities.
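A guardrail of this kind can be sketched as a policy check that requires both identities before a tool call is dispatched. The policy keys and names below are illustrative assumptions, not a real MCP API:

```python
# Hypothetical guardrail between an AI model and its tools.
# A tool call is allowed only if BOTH the human user's identity
# and the AI agent's identity are permitted for that tool.
POLICY = {
    # (user_role, agent_id) -> tools that pair may invoke
    ("analyst", "report-bot"): {"read_tasks"},
    ("admin", "ops-bot"): {"read_tasks", "delete_tasks"},
}

def authorize_tool_call(user_role: str, agent_id: str, tool: str) -> bool:
    # Neither identity alone is sufficient: an admin using an
    # untrusted agent, or a trusted agent acting for the wrong
    # user, is denied.
    return tool in POLICY.get((user_role, agent_id), set())
```

The design choice here is that access is keyed on the pair of identities, so neither a trusted user with an unknown agent nor a trusted agent acting for the wrong user slips through.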
The bigger picture
Asana is not an outlier. If anything, Asana should be commended for their rapid and serious response. Bugs like this will happen again in other MCP servers, and preparation is key to mitigating the harm. As AI becomes more embedded in enterprise workflows, the number of non-human actors will explode, and so will the risk. The first priority is ensuring that your AI workflows and agents have non-human identities you can use to limit their actions, and observe their behavior.
Three basic steps to manage security risk when integrating AI
Start with three basic steps to manage the security risks of integrating AI into your system.
- Step 1: Ensure that your AI agents have non-human identities.
- Step 2: Enforce access control based on those identities.
- Step 3: Observe their behavior, and alert on anomalies.
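The three steps above can be sketched in one place. Everything here (the registry, the rate threshold, the function names) is an assumed, simplified stand-in for real identity, authorization, and monitoring infrastructure:

```python
# Minimal sketch of the three steps; all names are illustrative.
from collections import Counter

# Step 1: every agent has a registered non-human identity.
AGENT_REGISTRY = {"agent-123": {"owner": "data-team", "allowed": {"read"}}}
call_counts: Counter = Counter()

def guarded_call(agent_id: str, action: str, rate_limit: int = 100) -> str:
    agent = AGENT_REGISTRY.get(agent_id)
    if agent is None:
        # Step 1: no identity, no access.
        raise PermissionError("unknown agent: no non-human identity")
    if action not in agent["allowed"]:
        # Step 2: access control based on that identity.
        raise PermissionError(f"{agent_id} may not {action}")
    call_counts[agent_id] += 1
    if call_counts[agent_id] > rate_limit:
        # Step 3: observe behavior and alert on anomalies.
        print(f"ALERT: anomalous call volume from {agent_id}")
    return f"{action} executed for {agent_id}"
```

A real deployment would back the registry with workload identity (e.g. short-lived credentials), the policy with a proper authorization system, and the counter with streaming anomaly detection, but the shape is the same.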
Conclusion
The rise of AI agents means we’re no longer just securing users — we’re securing decisions made at machine speed, in real time, and across organizational boundaries.
Bugs like the one in Asana’s MCP server won’t be the exception — they’ll be the test of whether your architecture is ready.
Now’s the time to build with identity at the core.