If you’re experimenting with AI agents, you’ve probably run into this problem: once an agent starts calling tools (APIs, models, email systems, databases, background jobs), it becomes hard to control what happens next.
Permissions answer: “Can this agent use this tool at all?”
Rate limits answer: “How fast can it call it?”
But agents fail in a different way. They retry, loop, fan out, call expensive models, send too many emails, trigger jobs, or keep acting after the run has already gone off track.
I built Cycles to tackle this problem.
It’s an open-source runtime authority layer for AI agents. Before an agent takes a costly or risky action, Cycles checks whether that action is still within the allowed budget or policy. If yes, it reserves the allowance, the action runs, and then the agent commits what actually happened. If not, the action is blocked before execution.
The goal is to make agent execution safer under:
- runaway retry loops
- unexpected model/API spend
- multi-step agent workflows
- concurrent agents sharing the same budget
- per-user / per-tenant limits
- risky actions like emails, DB writes, API calls, or job triggers
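To make the reserve → execute → commit flow concrete, here's a minimal in-process sketch of the idea. All the names here (`Budget`, `reserve`, `commit`) are illustrative, not the actual Cycles API, and a real deployment would hold this state somewhere shared rather than in one process:

```python
import threading

class BudgetExceeded(Exception):
    pass

class Budget:
    """Hypothetical sketch of a reserve -> execute -> commit guard.
    Not the Cycles API; names and shapes are illustrative only."""

    def __init__(self, limit: float):
        self._limit = limit
        self._committed = 0.0   # spend that actually happened
        self._reserved = 0.0    # spend held for in-flight actions
        self._lock = threading.Lock()  # concurrent agents share one budget

    def reserve(self, estimated_cost: float) -> None:
        # Block the action *before* execution if it would exceed the budget,
        # counting both finished spend and reservations still in flight.
        with self._lock:
            if self._committed + self._reserved + estimated_cost > self._limit:
                raise BudgetExceeded(f"would exceed limit of {self._limit}")
            self._reserved += estimated_cost

    def commit(self, estimated_cost: float, actual_cost: float) -> None:
        # Release the hold and record what the action really cost,
        # which may differ from the estimate.
        with self._lock:
            self._reserved -= estimated_cost
            self._committed += actual_cost

budget = Budget(limit=10.0)
budget.reserve(4.0)                   # hold allowance before the model call
# ... run the expensive model / email / DB action here ...
budget.commit(4.0, actual_cost=3.2)   # record real spend, free the hold
```

The key property is that a retry loop hits `BudgetExceeded` before the next call executes, not after the bill arrives; the in-flight `_reserved` counter is what keeps concurrent agents from each passing the check independently and collectively blowing the limit.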
This is not meant to replace observability or tracing. Those are still useful. The gap is the moment before execution, not after the bill arrives or the side effect has already happened.
Repo: https://github.com/runcycles
Curious how others here are handling this today:
Do you gate agent actions before execution?
Do you rely mostly on logs / alerts after the fact?
Would a reserve → execute → commit model be useful in real agent systems, or does it feel like too much infrastructure too early?