March 9, 2026

Who Approves the Agent?

AI agents that take action need permission. In a team thread, the permission model changes completely.


In a private AI chat, approval is simple. The agent says "I'll refactor this function" — you say yes. Done. One person, one decision.

Now put that same agent in a team thread. It says "I'll open a PR to fix the deploy config." Who approves? The person who reported the bug? The engineer who owns that service? The tech lead? Everyone?

When the whole team can see the agent's actions, the question isn't just "should the agent do this?" — it's "who gets to say yes?"

The Approval Problem

Private AI tools don't have this problem because there's no audience. You and the AI have an implicit contract: you ask, it acts. But in a shared space, unilateral action feels wrong — even if the action is correct.

Imagine the agent opens a PR that refactors auth middleware. The person who asked for it thinks it's great. The engineer who wrote the original code had no idea it was happening. That's not collaboration — that's a surprise.

Intent Before Action

The fix isn't complex approval chains. It's making the agent's intent visible before it acts:

The agent proposes, then pauses. Instead of silently opening a PR, the agent says: "I can fix this by updating deploy.yml — here's what I'd change. Want me to open the PR?" The plan is visible. Anyone in the thread can react — approve, object, or refine.

Context determines who matters. The agent doesn't need sign-off from everyone. But it can recognize scope. A typo fix? Anyone can approve. A schema migration? The agent should wait for the person who owns the database. This isn't hard-coded — it comes from the team's natural structure surfaced in the conversation.

Silence is consent, within bounds. For low-risk actions the team has seen before, the agent can act after a reasonable pause if nobody objects. For high-risk actions — deleting data, modifying production config, anything irreversible — explicit approval is non-negotiable. An autonomy dial, set by the team, controls where that line sits.
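The three rules above can be sketched as a single decision function. This is a minimal illustration, not a real implementation: the risk tiers, the `required_approver` parameter (standing in for scoped sign-off like a database owner), and the `SILENCE_WINDOW` constant (a stand-in for the autonomy dial) are all hypothetical names chosen for this sketch.

```python
from dataclasses import dataclass, field
from enum import Enum

class Risk(Enum):
    LOW = "low"        # e.g. a typo fix: silence is consent
    MEDIUM = "medium"  # e.g. a code change: someone should say yes
    HIGH = "high"      # e.g. a schema migration: explicit approval required

@dataclass
class Proposal:
    action: str
    risk: Risk
    approvals: set = field(default_factory=set)
    objections: set = field(default_factory=set)
    seconds_waited: float = 0.0

# Hypothetical "autonomy dial": how long the agent waits before a
# low-risk action proceeds on silence alone.
SILENCE_WINDOW = 300.0

def may_proceed(p: Proposal, required_approver: str = None) -> bool:
    """Decide whether the agent may act on a visible proposal."""
    if p.objections:
        return False                    # any objection blocks the action
    if p.risk is Risk.HIGH:
        return bool(p.approvals)        # explicit approval, non-negotiable
    if required_approver is not None:
        return required_approver in p.approvals  # scoped sign-off
    if p.risk is Risk.LOW:
        # someone approved, or the silence window elapsed with no objection
        return bool(p.approvals) or p.seconds_waited >= SILENCE_WINDOW
    return bool(p.approvals)
```

The key property is that an objection from anyone in the thread always wins: silence-is-consent only applies when the proposal was visible and nobody pushed back.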

Visible Action, Shared Ownership

When an agent takes action in a team thread, something important happens: the action has provenance. Everyone saw the request, the plan, the approval, and the result. Nobody wonders "who told the AI to do that?" — the thread tells the whole story.
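Provenance here is just the four things the thread already captured: the request, the plan, the approval, and the result. A minimal sketch of that record, with all field names hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProvenanceRecord:
    """One agent action, traceable end to end within its thread."""
    thread_id: str      # where the whole exchange is visible
    requested_by: str   # who asked
    plan: str           # what the agent proposed before acting
    approved_by: tuple  # who said yes (empty if silence-is-consent applied)
    result: str         # what actually happened

record = ProvenanceRecord(
    thread_id="deploy-fix",
    requested_by="sam",
    plan="Update deploy.yml to fix the deploy config",
    approved_by=("priya",),
    result="PR opened",
)
```

Nothing in the record is reconstructed after the fact; each field is a pointer back into the conversation everyone already saw.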

This is the opposite of how most AI tools work today. In private chat, AI actions are invisible to the team. You become the messenger — reporting what the AI did after the fact, without the reasoning that led there.

In a shared thread, the action and its context are inseparable.


The safest AI isn't the one with the most restrictions. It's the one whose work happens where the team can see it.