RCAN: The Case for Robot Accountability Before You Need It
Update (March 12, 2026): RCAN v1.3 is now current — §18–§20 and Appendix B promoted to Stable, §21 (Registry Integration) introduced.
There’s a moment in every serious robot incident — a warehouse arm that injures a worker, a delivery robot that makes a wrong decision, an autonomous system that does something unexpected in a context no one planned for — where the investigation hits a wall.
Not because the robot failed. But because no one can prove what it did.
Which command arrived? Who sent it? Was it authorized under the access policy in effect at that moment? What did the robot’s sensors show? What was the confidence level on the AI inference that produced the action? Was there a human in the loop? What did the audit log say?
In most deployments today, the answers to those questions are either unavailable or forensically indefensible. Logs are in inconsistent formats on local storage. Access control is bespoke per manufacturer. The chain of custody from “operator issued command” to “robot executed action” is informal and unreproducible.
That’s the problem RCAN was built to solve.
The automotive lesson — and why we’re repeating it
In January 2014, SAE International published J3016 — the six-level taxonomy that gave the world a shared vocabulary for vehicle automation. Level 0 through Level 5. By the time it dropped, Google had been running self-driving cars on public roads for five years. Tesla Autopilot was months from launch. Dozens of companies — automakers, tech firms, startups — were already testing on public roads with systems that had no common vocabulary, no shared liability framework, and no agreed definition of what the human was responsible for at any given moment.
The standard was chasing the technology. It arrived after the ecosystem had already fragmented.
The semantic gap between “the car can handle this” and “the driver must remain alert” contributed to fatal crashes; NTSB investigations repeatedly cited driver misunderstanding of automation levels as a contributing factor. Regulators in different states wrote different rules. Courts are still working through liability frameworks that should have been established before any of these vehicles left the test track.
The lesson wasn’t that autonomy is dangerous and should be slowed down. Automated vehicles are, in most conditions, safer than human drivers. The lesson was that deploying first and standardizing later is expensive — in litigation, in regulatory chaos, and sometimes in lives.
Standards that define things precisely, before the ecosystem fragments, enable faster development. Not slower. J1939 — the SAE vehicle network standard for heavy vehicles, first documented in 1994 with CAN formally adopted in 2000 — gave the trucking industry a common communication layer that enabled a generation of interoperable components. The standard created interoperability that would have taken years of bilateral integration agreements to replicate.
Robotics is at the same inflection point. Right now.
What existing safety standards miss
The safety standards that govern industrial robots today were written for a different kind of machine — one where a human programmer defined every motion, and the robot executed it deterministically. Here’s what exists and where each stops:
ISO 10218-1:2025 — The cornerstone of industrial robot safety, just published in 2025 — the first major revision since the 2011 edition. It covers collaborative robot operation parameters alongside ISO/TS 15066 and added cybersecurity requirements for the first time. But it was designed in the context of traditional control system security. What it doesn’t cover:
- AI model identity — which model produced the command that caused a robot action?
- Confidence-based gating — at what confidence level was the decision made?
- Human-in-the-loop authorization — was a human required to approve this AI decision?
- AI decision provenance — what was the model reasoning? What was the input?
OPC UA — The dominant communication standard in industrial automation. Designed for deterministic industrial systems. LLM-driven robots don’t work that way. OPC UA has no concept of an AI agent, model identity, or uncertainty-aware decision loops.
ROS2 / DDS — Where the robotics research community lives. ROS2 has no access control model by default. SROS2 adds partial security but is widely considered incomplete and operationally complex. No audit trail requirement. No access control enforced at the protocol layer.
IEC 62443 — Operational technology cybersecurity with meaningful audit requirements. SL2 requires authenticated sessions and log integrity; SL3 requires tamper-evident logging. RCAN’s HMAC-keyed audit chain addresses SL2 at the protocol layer for AI-driven robots. But IEC 62443 was written for traditional control systems and has no provisions for AI model identity, confidence-based gating, or AI decision provenance.
The pattern: every one of these standards stops below the AI layer. They define how hardware should behave safely. None address what happens when an AI model is making the decisions.
What RCAN actually does
RCAN — Robot Communication and Addressing Network — is an open protocol specification for robot networking built from the safety requirements outward. It’s at v1.3, covering four things most robot deployments handle inconsistently or not at all.
Addressing. Every robot gets a globally unique identifier — a Robot URI — in a defined, resolvable format:
rcan://<registry>/<manufacturer>/<model>/<version>/<device-id>
Without it, you can’t definitively say which robot executed a command in a fleet of identical units. You can’t cross-reference incident logs against a specific device. Global, stable, resolvable addressing is the foundation everything else builds on.
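A minimal parser for that format can be sketched in a few lines. This is illustrative only: `RobotURI` and `parse_robot_uri` are invented names, not the reference implementation, and the example URI values are made up.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RobotURI:
    """Parsed form of an rcan:// Robot URI (illustrative, not the reference parser)."""
    registry: str
    manufacturer: str
    model: str
    version: str
    device_id: str

def parse_robot_uri(uri: str) -> RobotURI:
    # Spec format: rcan://<registry>/<manufacturer>/<model>/<version>/<device-id>
    scheme, _, rest = uri.partition("://")
    if scheme != "rcan":
        raise ValueError(f"not an rcan URI: {uri!r}")
    parts = rest.split("/")
    if len(parts) != 5 or not all(parts):
        raise ValueError("expected 5 non-empty path segments")
    return RobotURI(*parts)

uri = parse_robot_uri("rcan://robotregistryfoundation.org/acme/arm-7/2.1/SN-00417")
# uri.device_id is "SN-00417"; an http:// URI raises ValueError
```

The point of the strict five-segment check is that an address either resolves to exactly one device or is rejected; there is no partial match to argue about in an incident review.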
Authentication and access control. RCAN defines a layered permission model with five roles — GUEST through CREATOR — and explicit session TTLs at each level. Commands carry cryptographic authentication. The authorization chain is logged. You can reconstruct, after the fact, exactly who was authorized to issue what commands, under which policy, at what time.
| Role | Level | Session TTL | Key permissions |
|---|---|---|---|
| GUEST | 1 | 5 min | Status reads, read-only telemetry |
| USER | 2 | 1 hour | Operational control within allowed modes |
| LEASEE | 3 | 2 hours | Operational control + config reads/writes |
| OWNER | 4 | 8 hours | Config writes, training (memory + context) |
| CREATOR | 5 | Unlimited | Safety overrides, firmware, full access |
Every command carries a declared scope. Commands from principals that lack the required role are rejected at the protocol layer — not by the application, not by the model. By the spec.
Forensic audit trails. RCAN messages carry structured metadata: timestamp, sender identity, authorization scope, and action taken. The audit record (CommitmentRecord) adds — in v1.2 — AI model identity, confidence score, and human-in-the-loop gate status. The AI accountability layer (§16) was added specifically because AI-driven decisions introduce a new category of forensic question. When an AI model infers “the object in position X is safe to interact with,” the action depends on a confidence distribution, a model version, a training dataset, and an inference context. All of that needs to be in the record.
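One common way to make such a record tamper-evident (and the kind of mechanism RCAN's HMAC-keyed audit chain implies) is to have each entry's MAC cover the previous entry's MAC, so editing any record breaks every record after it. A sketch, using Python's standard `hmac` module; the field names here are illustrative, not the normative CommitmentRecord wire format:

```python
import hashlib
import hmac
import json

def append_record(chain: list, key: bytes, record: dict) -> dict:
    """Append an audit entry whose MAC covers the previous entry's MAC."""
    prev_mac = chain[-1]["mac"] if chain else "genesis"
    body = {**record, "prev_mac": prev_mac}
    payload = json.dumps(body, sort_keys=True).encode()
    mac = hmac.new(key, payload, hashlib.sha256).hexdigest()
    entry = {**body, "mac": mac}
    chain.append(entry)
    return entry

def verify_chain(chain: list, key: bytes) -> bool:
    """Recompute every MAC; any edited or reordered entry fails."""
    prev = "genesis"
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "mac"}
        if body["prev_mac"] != prev:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(entry["mac"], expected):
            return False
        prev = entry["mac"]
    return True

key = b"audit-key"        # illustrative; key management is out of scope here
chain: list = []
append_record(chain, key, {
    "robot": "rcan://example-registry/acme/arm-7/2.1/SN-00417",
    "action": "pick", "model_id": "grasp-net:1.4",
    "confidence": 0.91, "hitl_gate": "not_required", "ts": 1760000000,
})
verify_chain(chain, key)          # True
chain[0]["confidence"] = 0.99
verify_chain(chain, key)          # False: tampering breaks the MAC
```

Note that the AI-specific fields (model identity, confidence, gate status) ride inside the same authenticated payload as the action itself, so they cannot be revised independently of the record they describe.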
Safety gates. RCAN defines explicit safety state transitions and a structured HiTL (human-in-the-loop) gate mechanism. High-stakes actions, and actions whose model confidence falls below a configurable threshold, require explicit human authorization before execution. The gate state is logged. The authorization is cryptographic. When a regulator asks “was there human oversight of this action?” — you have a yes/no with a signed timestamp.
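The gate decision itself can be sketched as a pure function. This assumes one plausible policy (designated high-stakes actions always gate, and anything below a configured confidence floor gates); the action names and threshold value are invented for illustration, and a real deployment would log and sign the decision rather than just return it.

```python
from enum import Enum

class GateDecision(Enum):
    ALLOW = "allow"                    # may execute autonomously
    REQUIRE_HUMAN = "require_human"    # blocked until a human authorizes

HIGH_STAKES = {"release_brake", "override_safety_zone"}  # illustrative actions
CONFIDENCE_FLOOR = 0.85                                  # illustrative threshold

def hitl_gate(action: str, confidence: float) -> GateDecision:
    """Decide whether an AI-proposed action may proceed without a human.

    Sketch only: in RCAN the resulting gate state is logged and the
    human authorization, when required, is cryptographically signed.
    """
    if action in HIGH_STAKES or confidence < CONFIDENCE_FLOOR:
        return GateDecision.REQUIRE_HUMAN
    return GateDecision.ALLOW

hitl_gate("pick_object", 0.93)    # ALLOW
hitl_gate("pick_object", 0.60)    # REQUIRE_HUMAN: confidence below the floor
hitl_gate("release_brake", 0.99)  # REQUIRE_HUMAN: high-stakes regardless of confidence
```

Keeping the decision a pure function of the proposed action and its confidence means the same inputs always reproduce the same gate outcome, which is what makes the logged decision auditable.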
Where the active standards work is happening
The conversation about AI accountability in safety-critical systems is moving. CEN/CENELEC JTC 21 is working under a mandate from the European Commission to produce harmonized standards for the EU AI Act, with active work items covering AI trustworthiness, logging, transparency, and human oversight — precisely the layer RCAN §16 implements at the protocol level.
These standards need to be ready before the EU AI Act’s high-risk provisions apply. Autonomous robots — industrial robots, cobots, mobile platforms, and service robots deployed as safety components of machinery — are classified as high-risk under Article 6(1) and Annex I, which lists the Machinery Regulation (EU) 2023/1230, with a compliance deadline of 20 January 2027. High-risk AI systems listed under Annex III face an earlier August 2026 deadline and are also subject to EU database registration under Article 49.
One distinction that matters: safety standards state conformity requirements — they don’t prescribe how manufacturers achieve them. RCAN isn’t a compliance recipe. It’s the protocol-layer infrastructure that makes those conformity requirements demonstrable. When an assessor asks “show me that this AI system’s decisions are logged, traceable, and subject to human oversight,” RCAN is the mechanism that produces the evidence. The standards say what to demonstrate. RCAN provides the plumbing that makes it demonstrable.
Why now
Three things are converging:
EU AI Act, August 2026 and 2027. High-risk AI provisions go into effect August 2026 (Annex III standalone systems) and 20 January 2027 (Annex I, AI embedded in regulated products including machinery). Robots in safety-critical environments will be subject to requirements around transparency, human oversight, and audit documentation. RCAN is the infrastructure that makes that demonstration concrete rather than aspirational.
JTC 21 is writing the standards now. The window to influence what “adequate audit logging” means in a robot context is right now, not after the spec is finalized. ISO committees don’t write standards from scratch — they codify existing practice, published specifications, and documented implementations. The goal is to be the existing work that the next revision cycle builds on.
The fragmentation window is still open. Every month that passes without a common addressing and audit protocol is another month of proprietary implementations that will have to be migrated later, or that will become de facto standard by inertia even though they weren’t designed for safety-first use cases.
Every serious open standard started with one person convinced the problem was real before anyone else was. The window before fragmentation becomes structural always closes. The choice is worth making deliberately.
What adoption looks like
RCAN is a protocol specification, not a library. You implement it in your stack. The spec is open, the reference implementation is open, and the robot registry at robotregistryfoundation.org is free for early adopters.
If you build robot hardware: Register your models in the RCAN registry. Assign URIs at manufacturing. Ship with RCAN addressing support so integrators can address your robots in multi-vendor fleets without building bespoke connectors.
If you build robot software: Implement RCAN message types for command-response, status reporting, and safety events. Add confidence gating and HiTL gates to your inference pipeline. Produce structured commitment records for safety-critical actions.
If you operate robot fleets: Adopt RCAN addressing for your device inventory. Require RCAN-compliant audit logs from vendors as a procurement condition. Build your access control policy model against RCAN scopes.
If you’re a systems integrator: Use RCAN as the common substrate for multi-manufacturer fleet integration. Stop building bespoke translation layers between proprietary addressing schemes.
The spec is at rcan.dev/spec. The robot registry is open at robotregistryfoundation.org/registry. The GitHub repo is at github.com/continuonai/rcan-spec.
The automotive industry standardized after the chaos. We have a shot at doing it the other way.
Craig Merry is the author of RCAN and OpenCastor. He builds AI-driven robot systems and protocol infrastructure at craigmerry.com.