
Show HN: OneCLI – Vault for AI Agents in Rust

https://github.com/onecli/onecli
This problem and solution, like many others in the agentic space, have nothing agent-specific about them. Giving a "box" API keys was always considered a risk, and auth-proxying has existed as a solution forever. See tokenizer[0] by the fly.io team, which implements this as a stateless service, e.g. with no database or dashboard. Or the BuzzFeed SSO proxy, which lets you do the same via an OAuth2 dance at the frontend and an upstream config at the backend that injects secrets: https://github.com/buzzfeed/sso/blob/549155a64d6c5f8916ed909....

[0]: https://github.com/superfly/tokenizer
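
The core move is just a header rewrite in the proxy. A minimal sketch of the idea in Python (names and env vars are made up, not tokenizer's or OneCLI's actual API): the agent only ever holds a placeholder, and the proxy swaps it for the real key before forwarding upstream.

    import os
    import urllib.request

    PLACEHOLDER = "AGENT_PLACEHOLDER_KEY"              # what the agent is given
    REAL_KEY = os.environ.get("UPSTREAM_API_KEY", "")  # held only by the proxy

    def forward(url: str, headers: dict[str, str]) -> bytes:
        # Swap the placeholder credential for the real one, then forward upstream.
        outgoing = dict(headers)
        if outgoing.get("Authorization") == "Bearer " + PLACEHOLDER:
            outgoing["Authorization"] = "Bearer " + REAL_KEY
        req = urllib.request.Request(url, headers=outgoing)
        with urllib.request.urlopen(req) as resp:
            return resp.read()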

This can also be done with an existing vault or secrets manager. HashiCorp Vault can do this: agents can be instructed to fetch secrets that were set without the agent's knowledge. I use these two simple scripts with OpenClaw to achieve this, along with time-scoped expiration. The call to vault_get.sh sits inside the agent's skill script so that the secrets are not leaked to LLMs or into any trace logs:

vault_get.sh: https://gist.github.com/sathish316/1ca3fe1b124577d1354ee254a...

vault_set.sh: https://gist.github.com/sathish316/1f4e6549a8f85ac5c5ac8a088...

Blog about the full setup for OpenClaw: https://x.com/sathish316/status/2019496552419717390
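
For anyone who doesn't want to click through, here's roughly the shape of the two helpers, sketched in Python with the hvac client (a guess, assuming a KV v2 mount and VAULT_ADDR/VAULT_TOKEN in the environment; the gists above are the real versions):

    import os
    import hvac  # HashiCorp Vault client

    client = hvac.Client(url=os.environ.get("VAULT_ADDR", "http://127.0.0.1:8200"),
                         token=os.environ.get("VAULT_TOKEN"))

    def vault_get(path: str, field: str) -> str:
        # Read one field of a KV v2 secret; the value stays inside the skill
        # script and never enters the LLM context or trace logs.
        resp = client.secrets.kv.v2.read_secret_version(path=path)
        return resp["data"]["data"][field]

    def vault_set(path: str, **fields: str) -> None:
        # Write or rotate a secret out of band, without the agent's involvement.
        client.secrets.kv.v2.create_or_update_secret(path=path, secret=fields)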

This is the right approach. I built a similar system, https://github.com/airutorg/airut - a couple of learnings to share:

1) Not all systems respect HTTP_PROXY. Node in particular is very uncooperative in this regard.

2) AWS access keys can't be handled by a simple credential swap; the requests need to be re-signed with the real keys. Replicating SigV4 and SigV4A exactly was a bit of a pain.

3) To be secure, this system needs to run outside of the execution sandbox so that the agent can’t just read the keys from the proxy process.

For Airut I settled on a transparent (mitm)proxy running in a separate container, with the proxy's cert injected into the cert store of the container where the agent runs. This solved 1 and 3.
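
For anyone hitting point 2: botocore can do the re-signing once the proxy has reconstructed the request. A rough sketch (credentials and header handling here are illustrative, and SigV4A needs its own path):

    from botocore.auth import SigV4Auth
    from botocore.awsrequest import AWSRequest
    from botocore.credentials import Credentials

    real_creds = Credentials("REAL_ACCESS_KEY", "REAL_SECRET_KEY")

    def resign(method: str, url: str, headers: dict, body: bytes,
               service: str, region: str) -> dict:
        # Drop the agent's (fake-key) signature headers, then sign again with
        # the real credentials so the upstream AWS endpoint accepts the call.
        clean = {k: v for k, v in headers.items()
                 if k.lower() not in ("authorization", "x-amz-date",
                                      "x-amz-security-token", "x-amz-content-sha256")}
        req = AWSRequest(method=method, url=url, data=body, headers=clean)
        SigV4Auth(real_creds, service, region).add_auth(req)
        return dict(req.headers)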

I don't get the benefit. Yes, agents should not have access to API keys because they can easily be fooled into giving up those API keys. But what's to prevent a malicious agent from re-using the honest agent's fake API key that it exfiltrates via prompt injection? The gateway can't tell that the request is coming from the malicious agent. If the honest agent can read its own proxy authorization token, it can give that up as well.

It seems the only sound solution is to have a sidecar attached to the agent and have the sidecar authenticate with the gateway using mTLS. The sidecar manages its own TLS key - the agent never has access to it.
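
Something like this on the sidecar side, assuming Python and the requests library (paths and the gateway URL are made up): the agent talks to the sidecar over localhost, and only the sidecar ever reads the client key.

    import requests

    GATEWAY = "https://gateway.internal"

    def call_gateway(path: str, payload: dict) -> dict:
        # Client cert/key live on the sidecar's filesystem only; the gateway
        # authenticates the sidecar (not the agent) via mTLS.
        resp = requests.post(
            GATEWAY + path,
            json=payload,
            cert=("/etc/sidecar/client.crt", "/etc/sidecar/client.key"),
            verify="/etc/sidecar/gateway-ca.pem",
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()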

IronClaw seems to do this natively. I like the idea in general, so it's good to see this pulled out into its own tool.

I have a few questions:

- How can a proxy inject anything if the traffic is TLS-encrypted? (same question for IronClaw and others)

- Any adapters for existing secret stores? For example, could my fake credential be a 1Password entry path (like 1Password:vault-name/entry/field), so it would pull from 1Password instead of me having yet another place to store secrets?

Secret and credential sprawl is a real problem in agent pipelines specifically -- each agent needs its own scoped access and the blast radius of a leaked credential is much larger when an agent can act autonomously. We ended up with a tiered secret model: agents get short-lived derived tokens scoped to exactly the tools they need for a given task, not broad API keys. Revocation on task completion, not on schedule. More ops overhead upfront but caught two misuse cases that would have been invisible otherwise.
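For concreteness, one way to mint those derived tokens is a short-lived JWT whose scope claim lists only the tools for the task (claim names, TTL, and the signing scheme here are assumptions, not a prescription):

    import time
    import jwt  # PyJWT

    SIGNING_KEY = "replace-with-a-real-secret"

    def issue_task_token(agent_id: str, tools: list[str], ttl_seconds: int = 900) -> str:
        now = int(time.time())
        claims = {
            "sub": agent_id,
            "scope": tools,            # only the tools this task needs
            "iat": now,
            "exp": now + ttl_seconds,  # expires even if revocation is missed
        }
        return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

Revocation on task completion is then a denylist check at the gateway, with the exp claim as a backstop.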
Nice. The proxy-intercept approach is the right architecture. Agent gets a placeholder, the real credential never touches agent memory. Rust is a solid choice for something this sensitive.

The gap that gets teams eventually: this works great on one machine, but breaks at the team boundary. CI pipelines have no localhost. Multiple devs sharing agents need access control and audit trails, not just a local swap. A rogue sub-agent with the placeholder can still do damage if the proxy has no per-agent scoping.

We ran into the same thing building this out for OpenClaw setups. Ended up going vault-backed with group-based access control and HMAC-signed calls per request. Full breakdown on the production version: https://www.apistronghold.com/blog/phantom-token-pattern-pro...
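
The per-request signing piece is the part that's easy to sketch with just the standard library (header names and the canonical string below are assumptions; the write-up above has the real scheme):

    import hashlib
    import hmac
    import time

    def sign_request(agent_key: bytes, method: str, path: str, body: bytes) -> dict:
        # Each agent holds its own signing key; the proxy recomputes the HMAC,
        # which gives it per-agent attribution, scoping, and revocation.
        ts = str(int(time.time()))
        msg = "\n".join([method, path, ts, hashlib.sha256(body).hexdigest()])
        sig = hmac.new(agent_key, msg.encode(), hashlib.sha256).hexdigest()
        return {"X-Agent-Timestamp": ts, "X-Agent-Signature": sig}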

Oops, I read vault and thought Obsidian vault, haha - but yeah, one of the issues is that if your agent can _execute_ on the secret at all, it can potentially be convinced to use it in a way that does not benefit you, even if it doesn't have access to the secret itself.
You don't want to give the agent a raw key, so you give it a dummy one which will automatically be converted into the real key in the proxy.

So how does that help exactly? The agent can still do exactly what it could have done if it had the real key.

For one thing, it cannot leak secrets between services.
The fake-key-for-real-key thing seems like a problem. A lot of enterprise scanning tools look for keys in repos and other locations, and you will get a lot of false positives.

Otherwise this is cool, we need more competition here.

Does it act like an auth proxy?
This is slick but the only thing it prevents is agents from directly sharing the credentials through git or something.

But that’s not the biggest risk of giving credentials to agents. If they can still make arbitrary API calls, they can still cost money or cause security problems or delete production.

If you’re worried about creds leakage only because your credentials are static and permanent, well, time to upgrade your secrets architecture.

wuweiaxin's short-lived-tokens-per-task is the right model, but it hits a ceiling: AWS IAM makes it native; most SaaS APIs don't. GitHub hands you a bearer token, Stripe too, Notion too. The proxy fills that gap. You get per-request scope enforcement without depending on the upstream service supporting fine-grained auth. That's the actual answer to paxys's question -- it's not about preventing the agent from making the same calls, it's about enforcing which calls it's allowed to make when the credential issuer won't.
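Concretely, the enforcement layer can be as small as an allow-list the proxy checks before it swaps in the broad upstream token (agent names and rules below are illustrative):

    import fnmatch

    # Per-agent allow-list of (method, path pattern) the proxy will forward.
    ALLOWED = {
        "release-agent": [("GET", "/repos/*"), ("POST", "/repos/*/issues")],
    }

    def is_allowed(agent: str, method: str, path: str) -> bool:
        return any(method == m and fnmatch.fnmatch(path, pattern)
                   for m, pattern in ALLOWED.get(agent, []))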
Don't see any reason to use this over Vault.
tl;dr "scrt [set|get|list|....]" is also a great option

---

If this is of interest, I also recommend looking into: https://github.com/loderunner/scrt.

To me, it's a complement to 1Password.

I use it to save every new secret/api key I get via the CLI.

It's intentionally very feature limited.

Haven't tried it with agents, but wouldn't be surprised if the CLI (as is) would be enough.

Why not just use AWS Secrets Manager?