Infisical Launches Agent Vault for Secure AI Agent Credentials
Infisical released Agent Vault, an open-source HTTP credential proxy designed to secure credential management and access patterns for AI agents.
April 28, 2026
TL;DR
Infisical has released Agent Vault, an open-source HTTP credential proxy and vault built specifically for AI agents. It sits between your agent and the APIs it calls, injecting credentials at request time so agents never hold secrets directly. The architecture is sound, but the failure modes when you configure it wrong are severe enough that you should test it against your actual agent traffic before shipping anything to production.
The last time credential handling changed this much
The agent credential problem is not new. It is a rerun of what happened with server-side web applications in the early 2000s, when database passwords lived in PHP files, checked into version control, served alongside the code that used them. The solution that emerged over the next decade was not clever. It was boring: environment variables, then secret management services, then vault-style systems like HashiCorp Vault and AWS Secrets Manager. The pattern that worked was always the same. Remove the secret from the thing that uses it. Put it somewhere with access controls. Inject it at runtime.

What is different now is that agents are not servers. A server has a fixed identity, a known network location, a deployment pipeline you control. An agent has a session, a context window, and often a tool-calling interface that was designed for capability, not security. The credential gets passed to the agent so it can call an API. The agent puts it somewhere. Where? Usually in memory, sometimes in logs, occasionally in a response it sends back to a user who asked the right question.

The history here suggests the right intervention is architectural, not instructional. Telling agents not to leak secrets is like telling developers not to commit secrets. It works until it does not. The system has to make the leak structurally harder.

What Agent Vault is actually doing
Agent Vault positions itself as an HTTP proxy. The agent does not hold credentials. Instead, it makes requests to the vault proxy, which injects the appropriate credentials into the outbound request before forwarding it to the target API. The agent never sees the secret. It sees a proxied endpoint.

This is a meaningful design choice. The alternative approach, which many teams use today, is to pull secrets from a vault at agent startup and hand them to the agent as environment variables or tool configuration. That approach has a window: the credential is resident in the agent's runtime, and anything that can read the agent's state can read the credential. The proxy model closes that window by keeping the secret outside the agent's process entirely.

Infisical already runs a secrets management platform, so Agent Vault is building on existing infrastructure for the actual storage and access control layer. The agent-specific part is the HTTP proxy and the patterns for mapping agent requests to credentials without exposing those credentials in the request chain.

For teams already using Claude Code or building with tools like n8n or Gumloop, the practical integration question is whether your agent can be configured to route API calls through an HTTP proxy. Most tool-calling frameworks can. The harder question is whether your agent is making calls through a consistent enough interface that you can actually intercept them, or whether credentials are flowing through a dozen different paths.

How the proxy model differs from environment variable injection
Environment variable injection: secret leaves the vault at startup, lives in agent runtime for the session duration. Proxy model: secret never leaves the vault, injected per-request at the network layer. A compromised agent process exposes the session's secrets in the first model. In the proxy model, a compromised agent process can make authenticated requests during its session, but cannot extract the underlying credential.
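The core move of the proxy model can be sketched in a few lines. Everything below is a hypothetical illustration, not Agent Vault's actual API: `ROUTE_TABLE`, `inject_credentials`, and the host names are stand-ins for whatever mapping the real proxy maintains.

```python
# Minimal sketch of per-request credential injection, the core of the
# proxy model. ROUTE_TABLE and inject_credentials are hypothetical names,
# not Agent Vault's actual API.

ROUTE_TABLE = {
    # target host -> (header to set, secret to inject)
    "billing.internal.example": ("Authorization", "Bearer BILLING-SECRET"),
    "search.internal.example": ("X-Api-Key", "SEARCH-SECRET"),
}

def inject_credentials(target_host: str, agent_headers: dict) -> dict:
    """Return the headers the proxy forwards upstream. The agent's own
    headers never contain the secret; it exists only in this process."""
    forwarded = dict(agent_headers)
    if target_host in ROUTE_TABLE:
        header, secret = ROUTE_TABLE[target_host]
        forwarded[header] = secret
    # An unmapped host is forwarded without any credential, so the
    # upstream auth error stays visible instead of failing silently.
    return forwarded
```

In the environment-variable model, the equivalents of `ROUTE_TABLE`'s values would be resident in the agent's process for the whole session; here they exist only inside the proxy, for the lifetime of each forwarded request.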
Where this breaks in practice
The proxy architecture sounds clean. In practice, three things tend to go wrong.

The first is coverage. Teams configure Agent Vault for the API calls they know about and forget the ones they do not. An agent using a code execution tool, a web browsing tool, and a database connector has three different credential paths. If you only route the database connector through the proxy, you have not actually centralized your credential management. You have added a proxy for one path and left the others untouched, which gives you operational overhead without the security improvement.

The second is that proxy latency compounds. A single proxied request adds single-digit milliseconds. An agent making 40 tool calls in a session, each proxied, starts to feel different from the direct-call version. This matters less for batch workflows and more for anything that needs to feel interactive. Test your actual request volume before assuming the overhead is negligible.

The third, and most dangerous, is misconfiguration during the credential mapping setup. The proxy has to know which credential to inject for which request. If you misconfigure that mapping, you get one of two bad outcomes: the request fails with an auth error, which is annoying but visible, or the request succeeds with the wrong credential, which is worse in ways that are hard to detect. A test that sends real requests through the proxy against a staging environment, with assertions on which credential was actually used, is not optional.

There is also the audit question. One of the arguments for this kind of proxy is that you get a log of every credentialed request an agent makes. That is particularly useful for debugging and compliance. But that log is now a target. If you are storing the full request and response through the proxy, you need to think carefully about what the response bodies contain. Credentials do not have to appear in requests to cause problems. They can come back in responses.
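That staging check can be as small as a unit test that asserts credential identity, not just presence. Everything here is a hypothetical stand-in: `fake_proxy` simulates the mapping step, and in a real setup you would replace it with actual requests through your staging proxy, inspected at a capture point you control.

```python
# Sketch of the non-optional staging check: route a known request and
# assert WHICH credential was injected. All names (fake_proxy,
# STAGING_MAPPING, the hosts) are hypothetical illustrations.

STAGING_MAPPING = {
    "billing.staging.example": ("Authorization", "Bearer staging-billing-key"),
    "search.staging.example": ("X-Api-Key", "staging-search-key"),
}

def fake_proxy(host: str, headers: dict) -> dict:
    """Stand-in for the real proxy: applies the staging credential mapping."""
    header, value = STAGING_MAPPING[host]
    out = dict(headers)
    out[header] = value
    return out

def test_billing_route_uses_billing_credential():
    sent = fake_proxy("billing.staging.example", {})
    # The dangerous failure mode is a request that SUCCEEDS with the
    # wrong credential, so assert identity, not mere presence.
    assert sent["Authorization"] == "Bearer staging-billing-key"
    assert "X-Api-Key" not in sent
```

The point of asserting the exact value is to catch the swapped-mapping case, where a request to the billing API quietly succeeds with the search key's permissions.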
One thing to audit before you deploy this
Before you configure Agent Vault or any credential proxy for your agent stack, map every outbound HTTP call your agent makes during a typical session. Not the calls you designed. The calls that actually happen. Run your agent against a traffic-capturing proxy like mitmproxy or Charles and log every request for a 30-minute session. You will almost certainly find calls you did not know were being made, to endpoints you did not plan to secure.

That list is your configuration checklist for Agent Vault. Any uncovered endpoint is a gap. The proxy only helps if it is in the path for every credentialed call, not just the ones you remembered when you were writing the setup docs.

If you are comparing approaches to agent security more broadly, the cursor-vs-claude comparison touches on how different agent environments handle tool access, which is adjacent to the credential question. And if your agents are running in an orchestration layer, Goose has its own approach to tool credentialing that is worth reading against Agent Vault's model before you commit to either.
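The 30-minute capture described above can be done with a small mitmproxy addon. The `request` and `done` functions are mitmproxy's standard addon hooks; the file name and tally format are my own, and the tallying logic is split into `record` so it can be checked without mitmproxy installed.

```python
# log_calls.py -- sketch of a mitmproxy addon for the traffic audit.
# Run with:  mitmdump -s log_calls.py
# then point the agent's HTTP(S) proxy settings at mitmproxy and run a
# typical session. The printed list is your Agent Vault config checklist.

from collections import Counter

seen = Counter()  # (method, host) -> call count

def record(method, host, store):
    """Tally one outbound call; separated out so it is testable."""
    store[(method, host)] += 1
    return store

def request(flow):
    # mitmproxy invokes this hook for every outbound request the agent makes.
    record(flow.request.method, flow.request.host, seen)

def done():
    # On shutdown, print every distinct endpoint that was actually hit.
    for (method, host), count in sorted(seen.items()):
        print(f"{count:5d}  {method}  {host}")
```

Any host in that output that is not in your proxy's credential mapping is an uncovered path.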
Some links in this article are affiliate links.