Invariant Blog
Writing on AI agent reliability
Technical posts on coherence, state management, and the infrastructure layer that keeps agents honest.
Deep Dive · March 20, 2026
Why AI Agents Lie to Themselves — and How to Fix It
OpenClaw just crossed 100k stars. Millions of people are running autonomous agents that take real-world actions on their behalf. There's a failure mode nobody is talking about loudly enough.
Technical · March 20, 2026
The Coherence Score: How Invariant Measures Agent World State
Under the hood of Invariant's Φ formula: how we turn a graph of claims, constraints, and dependencies into a single number that tells you whether your agent's worldview can be trusted.