The Problem
Agent protocols like OpenClaw are enabling AI agents to act autonomously — transacting, deploying, governing. But there is no way to verify that these actions were actually taken by an AI.
A human could be directing every move; a "bot" claiming autonomy could just be a person behind a script. The claims are everywhere. The proof is nowhere.
The Protocol
ELIZA is a protocol for proving that actions were taken autonomously by AI — not directed, puppeted, or faked by humans.
If an agent says it acted on its own, ELIZA makes it prove it.
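To make the idea of a verifiable action claim concrete, here is a minimal, purely illustrative sketch of tamper-evident action attestation. It is not ELIZA's actual design: all names and fields are hypothetical, and it uses a symmetric HMAC for brevity, where a real protocol would use asymmetric signatures (and stronger evidence of autonomy than a key) so that third parties can verify without holding the agent's secret.

```python
import hashlib
import hmac
import json

def attest(secret_key: bytes, action: dict) -> dict:
    """Bind an action record to a key only the agent holds.
    Illustrative only: field names and scheme are hypothetical."""
    payload = json.dumps(action, sort_keys=True).encode()
    tag = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return {"action": action, "tag": tag}

def verify(secret_key: bytes, attestation: dict) -> bool:
    """Recompute the tag; any change to the action record breaks it."""
    payload = json.dumps(attestation["action"], sort_keys=True).encode()
    expected = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["tag"])

key = b"demo-agent-key"
record = attest(key, {"agent": "demo", "act": "deploy", "seq": 1})
assert verify(key, record)

# A human swapping in a different action cannot reuse the old tag.
forged = {"action": {"agent": "demo", "act": "transfer", "seq": 1},
          "tag": record["tag"]}
assert not verify(key, forged)
```

The point of the sketch is only the shape of the guarantee: an action claim either carries valid proof or it doesn't, so "it acted on its own" stops being a matter of trust.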