The Human Commit protocol
A simple rule for safe AI: propose changes, but require explicit human approval before execution.
The conversation around AI safety often focuses on alignment. But even well-aligned systems can make mistakes. The question is not just whether they will do harm, but whether they can.
At Senza, we believe the answer should be no. Not without explicit permission.
The problem with autonomous action
When you give an agent access to your calendar, email, or CRM, you're handing over powerful capabilities. Most frameworks treat this as a binary: either the agent can act, or it can't.
But real organizations need nuance. An agent should be able to propose moving a meeting, but not execute it until a human approves.
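As a rough sketch of that gating (the names ProposedAction, propose_meeting_move, and the tool identifiers are hypothetical, not part of any particular framework), the key move is separating the proposal an agent drafts from the call that actually changes state:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from uuid import uuid4


@dataclass
class ProposedAction:
    """An action the agent has prepared but is not allowed to execute."""
    tool: str            # e.g. "calendar.move_event"
    arguments: dict      # fully resolved parameters for the call
    rationale: str       # why the agent wants to do this
    proposal_id: str = field(default_factory=lambda: uuid4().hex)
    proposed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


def propose_meeting_move(event_id: str, new_start: datetime) -> ProposedAction:
    # The agent can read the calendar and draft the change freely...
    return ProposedAction(
        tool="calendar.move_event",
        arguments={"event_id": event_id, "new_start": new_start.isoformat()},
        rationale="Resolve conflict with the quarterly review",
    )
    # ...but nothing in this path touches the calendar. Execution lives
    # behind a separate, approval-gated step (sketched further below).
```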
Human Commit
We call this the Human Commit protocol. Inspired by version control, it requires a commit for every state-changing action: an explicit, logged authorization from a responsible party.
The agent does the legwork: gathering context, synthesizing options, preparing the action. But the final yes comes from you.
This is not about distrust. It is about accountability. People deserve to know who approved what, and when. The Human Commit protocol provides that audit trail by default.
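Continuing the hypothetical sketch above, the commit side might look something like this (the Commit record, audit log, and executor callback are assumptions for illustration, not a description of Senza's implementation):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable


@dataclass
class Commit:
    """The explicit, logged authorization for one proposed action."""
    proposal_id: str
    approved_by: str          # who said yes
    approved_at: datetime     # when they said it
    note: str = ""            # optional context from the approver


AUDIT_LOG: list[Commit] = []  # in practice: append-only, durable storage


def commit_and_execute(
    proposal: ProposedAction,
    approver: str,
    execute: Callable[[ProposedAction], None],
) -> Commit:
    """Record who approved what and when, then run the action."""
    commit = Commit(
        proposal_id=proposal.proposal_id,
        approved_by=approver,
        approved_at=datetime.now(timezone.utc),
    )
    AUDIT_LOG.append(commit)  # the audit record exists before anything changes
    execute(proposal)         # only now does state actually change
    return commit
```

Because the commit record is written before execution, the answer to "who approved what, and when" is always in the log, even if the action itself later fails.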
If you want the technical angle, read "The backbone of reliable AI: deterministic ops".