Case studies
737 MAX, OpenClaw, Knight Capital, Air Canada — with REMIT callouts.
Real incidents, and what REMIT would have caught
Three case studies from outside the agent world, and one from inside it, showing the failure modes REMIT is built to prevent.
Boeing 737 MAX — when the system worked exactly as designed
In 2018 and 2019, two Boeing 737 MAX aircraft crashed, killing 346 people. The cause was an automated system (MCAS) that pushed the nose down when its angle-of-attack sensor indicated an impending stall. The system worked exactly as its designers intended; the problem was that its designers had not adequately anticipated the conditions it would face.
- Single-input dependency. MCAS relied on a single angle-of-attack sensor. When that sensor fed bad data, the system had no cross-check.
- Inadequate override. Pilots fought the system for minutes. The override procedure was buried in a manual they had never trained on.
- Insufficient training. Airlines were not told MCAS existed.
- Commercial pressure. Timeline pressure led to shortcuts in safety validation.
The agentic parallel. Agents acting on incomplete organisational data without the context layer to know what they don't know. Human-in-the-loop becoming rubber-stamping when approval fatigue sets in. Deployments without people understanding what the agents can and cannot do. Small early errors compounding through multiple layers of agent reasoning before any human reviews.
What REMIT would have caught. REMIT-M (Monitoring) requires multiple inputs and real-time observability, not one sensor. REMIT-T (Trust) requires overrideability. REMIT-R (Responsibility) requires training and clear accountability. None of these were adequately in place.
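The multi-input requirement can be sketched in a few lines. This is a minimal illustration, not part of REMIT itself: the function name and tolerance are assumptions, and the point is simply that a single input is rejected outright and disagreement halts the action rather than proceeding.

```python
def cross_checked_reading(readings, tolerance=2.0):
    """Return a consensus value from independent inputs, or None to halt.

    Illustrative REMIT-M-style check: a single input is a single point
    of failure and is rejected; disagreeing inputs trigger a halt so a
    human can investigate, instead of the system acting on bad data.
    """
    if len(readings) < 2:
        return None  # one sensor: no cross-check possible, refuse to act
    if max(readings) - min(readings) > tolerance:
        return None  # inputs disagree: halt and escalate
    return sum(readings) / len(readings)
```

The design choice is that disagreement produces a refusal, not a best guess: the system fails safe rather than failing silently.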
OpenClaw / Moltbook — what happens without governance
OpenClaw was a locally-hosted AI assistant promising to connect 100+ integrations (WhatsApp, Telegram, Slack, Teams) and act autonomously. It gained 100K+ GitHub stars in two months, and 770K+ agents were posted to "Moltbook", an AI-only social network.
Independent security research found:
- 36.8% of marketplace skills had security flaws.
- 534 critical-level vulnerabilities.
- 1.5M API tokens exposed in an unsecured database.
- 335 coordinated malware skills under the banner "ClawHavoc".
- Moltbook's agent population eventually reached 1.7M AI agents, posting amongst themselves.
LangChain banned its own employees from installing it on company laptops. OpenClaw's own maintainer said: "If you can't understand how to run a command line, this is far too dangerous for you."
The agentic parallel. Democratised agentic AI arrived faster than enterprise guardrails. Skills marketplaces without security review. Credentials stored without access controls. Agent-to-agent interactions without any governance regime.
What REMIT would have caught. REMIT-E (Envelope) — skills should be whitelisted, not loaded dynamically from a marketplace. REMIT-I (Identity) — every agent needs verifiable provenance. REMIT-M (Monitoring) — 1.5M exposed tokens means the monitoring layer did not exist.
Knight Capital — autonomy without a kill switch
In August 2012, Knight Capital deployed a software update that accidentally re-activated dormant test code. Over 45 minutes, the automated system executed millions of erroneous trades, losing the firm $440M and driving it to near-bankruptcy. The engineers could see what was happening — they could not turn it off fast enough.
The agentic parallel. A kill switch that lives inside the agent's runtime is not a kill switch. By the time humans decide to intervene, thousands of actions have already fired.
What REMIT would have caught. REMIT-M requires circuit breakers outside the agent's runtime. Automatic halts when thresholds are exceeded. A governed emergency-revocation path that can be invoked in seconds.
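The key property is that the breaker sits between the agent and the execution venue, not inside the agent's own process. A minimal sketch, with an assumed action budget standing in for real thresholds:

```python
class CircuitBreaker:
    """Illustrative out-of-runtime breaker: every outbound action passes
    through it, and it trips permanently once a threshold is exceeded.
    The threshold and reset policy here are assumptions for the sketch."""

    def __init__(self, max_actions_per_window):
        self.max_actions = max_actions_per_window
        self.count = 0
        self.tripped = False

    def allow(self):
        """Gate one outbound action; once tripped, nothing gets through."""
        if self.tripped:
            return False
        self.count += 1
        if self.count > self.max_actions:
            self.tripped = True  # automatic halt; requires governed human reset
            return False
        return True

    def emergency_revoke(self):
        """Governed kill path, invocable in seconds from outside the agent."""
        self.tripped = True
```

Because the agent cannot reach the venue except through `allow()`, a runaway loop like Knight Capital's is cut off after the budget is spent, regardless of what the agent's own code is doing.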
Air Canada chatbot — the ownership question
In 2022, an Air Canada chatbot told a bereaved passenger that they could apply retroactively for a bereavement fare. They did. Air Canada refused the refund, arguing the chatbot was a "separate legal entity". In 2024, a tribunal disagreed: Air Canada owed the money.
The agentic parallel. If you deploy the agent, you are responsible for what it says and does. There is no "the chatbot did it" defence.
What REMIT would have caught. REMIT-R (Responsibility): named human accountability means someone would have had to sign off on the bereavement-fare policy before the chatbot could state it. Either the tool was wrong or the policy was wrong; it was not the chatbot's call to make.
What these share
Four incidents, one common thread: systems working exactly as designed, in conditions their designers had not adequately anticipated. That is why REMIT emphasises continuous observability and earned, revocable trust — not just a launch-time safety case.
Governance is not a compliance cost. It is a competitive advantage.