February 4, 2026
AI Agents Don’t Need Better Secrets. They Need Identity.
Last week, Wiz disclosed a major security exposure involving Moltbook, an AI agent social network. A misconfigured database exposed 1.5 million API keys, each one capable of fully impersonating an agent on the platform.
Anyone with a leaked key could post content, send messages, or modify data as that agent. There was no way to distinguish legitimate activity from an attacker holding the same credentials.
The immediate causes were familiar: a misconfigured database and long-lived API keys that never expired.
But focusing on misconfiguration misses the real lesson. This wasn’t an AI problem or a coding problem. It was an identity problem.
API Keys Are the Problem, Not the Solution
API keys are frequently treated as identity, but they do not actually identify anything. They are shared secrets. Strip away the AI headlines, and the Moltbook incident follows a pattern we’ve seen for decades:
- A static API key leaks.
- That key grants broad, possession-based access.
- The system cannot tell who or what is actually making the request.
- Impact is immediate, and an attacker can use that access to obtain more API keys and more access.
Every Moltbook agent relied on a static API key (moltbook_sk_*) that never expired, was not bound to a workload or runtime, and served as the sole proof of identity.
At this scale, that model collapses. One breach created 1.5 million permanent attack vectors. This isn't a new vulnerability; it's the same static-credential problem we've had for years. AI agents don't introduce new failures. They expose the limits of common patterns and architectural shortcuts we've long relied on.
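The core flaw is easy to see in code. The sketch below is a hypothetical, simplified version of possession-based authentication (the key value and function names are illustrative, not Moltbook's actual implementation): the server's only signal is whether the caller holds the string, so a legitimate agent and an attacker with the leaked key are indistinguishable.

```python
import hmac

# Illustrative long-lived shared secret, modeled on the
# "moltbook_sk_*" pattern described above. Not real code or a real key.
STORED_KEY = "moltbook_sk_example"

def authenticate(presented_key: str) -> bool:
    # The only evidence evaluated is possession of the string itself.
    # Nothing here identifies WHICH workload is calling, or from where.
    return hmac.compare_digest(presented_key, STORED_KEY)

# The legitimate agent and an attacker replaying the leaked key
# produce identical results: the server cannot tell them apart.
legitimate_request = authenticate("moltbook_sk_example")
attacker_request = authenticate("moltbook_sk_example")
```

Both calls return True, and no amount of server-side logic can separate them, because the request carries no verifiable identity, only the secret.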
Rotating or Vaulting Keys Doesn’t Fix the Root Issue
After incidents like this, the standard response is to store keys in a vault (rather than in clear text in a database), rotate them more frequently, or narrow their scope. These measures can reduce risk, but they increase operational burden without changing the underlying model.
API keys are fundamentally limited. They are typically long-lived, possession-based, and context-free. They must be created, distributed, rotated, and revoked manually, and they provide no reliable way to attribute requests to a specific workload or runtime.
A vaulted secret is still a static secret. A rotated key is still a key.
As long as authentication depends on possession of a reusable string, the system cannot determine which workload is making a request, where it is running, or whether it is the intended deployment. Those are identity questions, and API keys were never designed to answer them.
Agents Make “Internal” and “Trusted Network” Models Collapse
Moltbook relied on a database configuration that was effectively exposed and insecure by default. As happened with AWS S3 buckets in the past, this design prioritized developer convenience and ultimately created a security nightmare. Anyone could query it directly. That’s embarrassing for a public service, but the more uncomfortable parallel is internal environments.
Organizations still rely on assumptions such as “it’s behind the firewall,” “it’s internal,” or “only trusted systems know about it.”
This worked (sort of) when only humans accessed these systems. Humans follow implicit rules. Humans don’t systematically probe every endpoint. Humans understand “that’s not for you” without being told explicitly. Agents don’t.
When an agent is given network access to internal tools, it will explore. If an endpoint is reachable and helps achieve its objective, the agent will use it. This is not an attack or prompt injection. It is simply how autonomous systems operate.
I’ve seen this firsthand. I asked an agent to commit code where GPG signing was required. When it couldn’t provide my YubiKey PIN, it discovered the --no-gpg-sign flag and bypassed the control. The task was completed successfully, and the security control was quietly circumvented, not maliciously, but effectively.
The same dynamic applies to unauthenticated internal services. Agents will find them, and they will use them. Models that rely on implicit trust or network placement inevitably expose gaps once agents are introduced.
Eradicating Static Keys Is the Only Durable Fix
The alternative to API keys is not speculative. Cloud platforms have already moved away from static credentials toward identity-based authentication for workloads.
The model looks like this:
- Short-lived credentials that expire in minutes or hours
- Dynamic attestation that cryptographically binds code to where it is running
- Automatic rotation with no human involvement
- Policy-based access tied to cryptographically verifiable identity
- Full auditability of which workload accessed what, when, and from where
In this model, access is granted based on cryptographic proof of workload identity rather than possession of a secret. Credentials are short-lived, issued dynamically, and evaluated against policy tied to verified runtime context. The result is reduced blast radius, improved auditability, and clearer attribution.
How Defakto Approaches Agent Identity
Agents are just another type of workload. They require the same identity properties as other automated systems operating at scale. As long as API keys are the only option, they will continue to leak and show up in breached databases.
At Defakto, we start from the assumption that static secrets do not scale. Instead, we replace API keys with cryptographically verifiable non-human identity for workloads.
Agents authenticate using short-lived, policy-driven identities that are bound to runtime context and issued as just-in-time X.509 and JWT credentials, compatible with existing enterprise infrastructure.
This approach removes entire classes of failure while providing clearer visibility and control for security and identity teams.
What This Means for AI Platforms and the Industry
The Moltbook incident should prompt AI platforms and service providers, including OpenAI and Anthropic, to ask a more basic question. When API keys repeatedly appear in breached databases, the issue isn’t how an application was written. It’s why static keys are still the default way agents authenticate at all.
As agents become first-class actors in production systems, authentication must shift from possession of secrets to verification of identity. When an agent or workload calls an LLM or AI service, it should authenticate using short-lived, attested credentials, following Workload Identity Federation patterns already proven by major cloud providers and now being explored through extensions to MCP.
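The federation pattern described above can be sketched as a token exchange: a workload presents an already-verified identity, and an issuing service grants a short-lived, policy-scoped credential instead of handing out a stored API key. The policy table, identity names, and scopes below are all hypothetical, and the verification step that would precede this in a real system is omitted.

```python
# Hypothetical policy: verified workload identity -> services it may call.
# In production this lives in a policy engine, not a dict.
POLICY = {
    "build-agent": {"llm.chat"},
    "support-agent": {"llm.chat", "crm.read"},
}

def exchange(verified_workload: str, requested_scope: str) -> dict:
    """Trade a verified workload identity for a short-lived, scoped grant.

    Assumes the caller's identity was already cryptographically
    attested; this function only evaluates policy and sets a lifetime.
    """
    allowed = POLICY.get(verified_workload, set())
    if requested_scope not in allowed:
        raise PermissionError(
            f"{verified_workload} is not authorized for {requested_scope}"
        )
    # Short TTL bounds the blast radius of any leaked grant.
    return {"sub": verified_workload, "scope": requested_scope, "ttl": 300}
```

Because every grant names the workload and a narrow scope, access is attributable and revocable by policy change, with no secret to rotate.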
The standards exist and the technology is proven at scale. What’s missing is native platform support. As long as API keys remain the default, or only, authentication option, incidents like this will continue. This wasn’t an AI failure or a startup mistake. It was the predictable result of applying an authentication model built for human scale to agent-scale systems. Static API keys are the problem. Eradicating them is the solution. If you’re building or securing AI agents and want to explore workload identity, we’d welcome the conversation.