The Safety Standards Landscape for AI Robots
Update (March 12, 2026): RCAN v1.3 is now current — §18–§20 and Appendix B promoted to Stable, §21 (Registry Integration) introduced. The registry has moved to robotregistryfoundation.org.
The safety standards that govern industrial robots today were written for a different kind of machine — one where a human programmer defined every motion, and the robot executed it deterministically. The gap between that assumption and what AI-driven robots actually do is where a lot of risk is accumulating quietly.
Here’s a map of what exists, what each standard covers, and where the AI accountability layer fits — or doesn’t.
ISO 10218-1:2025
The cornerstone of industrial robot safety. Just revised in 2025 after eight years of work — the first major update since 2011. It’s a big deal: it absorbed ISO/TS 15066 (collaborative robot operation) and added cybersecurity requirements for the first time.
That cybersecurity addition is worth understanding precisely. It covers:
- Access control for the robot control system
- Audit logging of access events and control actions
- Software integrity requirements
- Network security for connected systems
These are meaningful. They close real gaps. But they were designed in the context of traditional control system security — protecting the firmware, authenticating human operators, logging who touched what.
What they don’t cover:
- AI model identity — which model produced the command that caused a robot action?
- Confidence-based gating — at what confidence level was the decision made?
- Human-in-the-loop authorization — was a human required to approve this AI decision before execution?
- AI decision provenance — what was the model reasoning? What was the input? Why did it decide what it decided?
ISO 10218-1:2025 has no mechanism for any of this. The logging requirements capture operator access events. They don’t capture AI decision provenance.
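To make the contrast concrete, here is a minimal sketch in Python with hypothetical field names (RCAN's normative wire format is defined in the spec, not here): the first record is roughly what ISO 10218-1:2025-style audit logging captures, the second is the provenance the AI layer would need.

```python
from dataclasses import dataclass

# Roughly what ISO 10218-1:2025-style audit logging captures:
# an authenticated human touched the control system.
@dataclass
class AccessEvent:
    timestamp: str
    operator_id: str     # authenticated human operator
    action: str          # e.g. "program_upload", "mode_change"

# What AI accountability additionally needs: which model decided,
# on what input, how confident it was, and whether a human approved.
# Field names are illustrative, not RCAN's normative schema.
@dataclass
class AIDecisionRecord:
    timestamp: str
    model_id: str        # AI model identity, e.g. name + version + hash
    input_summary: str   # what the model was asked
    command: str         # what it told the robot to do
    confidence: float    # model confidence for this decision, 0.0-1.0
    human_authorized: bool  # was human approval required and given?
```

Nothing in the standard's logging clauses obliges a conformant system to record anything in the second record beyond the timestamp.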
Its US mirror, ANSI/A3 R15.06-2025, adopts the same scope.
OPC UA
The dominant communication standard in industrial automation. Strong data modeling, service-oriented architecture, device discovery, some access control built in.
Designed for deterministic industrial systems — PLCs, CNC machines, factory equipment producing measurable outputs at known intervals. The model assumes you know what data you’re exchanging and can define it in a schema.
LLM-driven robots don’t work that way. They receive ambiguous natural language, process it through a model with a confidence distribution, and produce outputs that may be probabilistic, context-dependent, and difficult to predict. OPC UA has no concept of an AI agent, model identity, or uncertainty-aware decision loops.
There are OPC UA companion specifications for robotics (starting from OPC 40010). They handle motion parameters and device state. They don’t handle the AI layer.
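What an "uncertainty-aware decision loop" means in practice can be sketched in a few lines. This is an illustrative pattern, not an OPC UA or RCAN construct; the threshold and return values are invented for the example.

```python
from typing import Callable

def dispatch(command: str, confidence: float,
             approve: Callable[[str], bool],
             execute: Callable[[str], None],
             threshold: float = 0.9) -> str:
    """Confidence-gated dispatch: high-confidence commands execute
    directly; low-confidence commands are held for explicit
    human-in-the-loop approval. The 0.9 threshold is illustrative."""
    if confidence >= threshold:
        execute(command)
        return "executed"
    if approve(command):      # escalate: a human must approve explicitly
        execute(command)
        return "executed_with_approval"
    return "rejected"
```

A deterministic bus has no field in which to carry `confidence` and no defined semantics for the approval branch; that is the structural gap, not a missing data type.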
ROS2 / DDS
Where the robotics research community lives. Excellent publish-subscribe messaging architecture, hardware abstraction, sensor integration, and a rich ecosystem of packages.
Security, though, is opt-in. ROS2 ships with SROS2, an extension that adds authentication and encryption on top of DDS Security, but in practice most deployments never enable it. There's no audit trail requirement, no access control enforced at the protocol layer by default, and no concept of AI accountability.
For research, this is fine. For production deployments near humans in safety-critical environments, it’s a problem that teams solve — inconsistently, separately, expensively — on their own.
IEC 62443
Operational technology cybersecurity. More directly relevant to RCAN than the others because it explicitly defines security levels (SL1–SL4) and audit requirements.
IEC 62443 SL2 requires authentication and security event logging. SL3 adds advanced integrity controls and log protection. The quantum commitment chain in OpenCastor via QuantumLink-Sim exceeds what SL3 requires for log integrity — HMAC-keyed chains with session-bound secrets that can’t be forged without the in-memory key, even with full access to the log file.
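The linking idea behind that chain can be sketched with a plain HMAC construction. This is an illustrative reduction; QuantumLink-Sim's actual format, and its quantum layer, are not reproduced here. Each entry's tag covers the previous tag plus the entry bytes, so modifying, reordering, or deleting any entry invalidates every tag that follows unless the attacker holds the session key.

```python
import hmac, hashlib

def append_entry(key: bytes, chain: list, entry: bytes) -> None:
    """Append an entry whose tag covers the previous tag + entry bytes.
    Without the in-memory session key, a valid tag can't be forged,
    even with full read/write access to the log file."""
    prev_tag = chain[-1][1] if chain else b"\x00" * 32
    tag = hmac.new(key, prev_tag + entry, hashlib.sha256).digest()
    chain.append((entry, tag))

def verify_chain(key: bytes, chain: list) -> bool:
    """Recompute every tag; any tampered, reordered, or deleted
    entry causes verification to fail from that point onward."""
    prev_tag = b"\x00" * 32
    for entry, tag in chain:
        expected = hmac.new(key, prev_tag + entry, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return False
        prev_tag = tag
    return True
```

This is the property that exceeds SL3's log-protection baseline: integrity isn't a permission on the file, it's a cryptographic invariant of the file's contents.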
IEC 62443 doesn’t address AI. It’s an OT cybersecurity framework, not an AI accountability framework. The audit and integrity concepts are adjacent, but the scope stops at the traditional control system boundary.
The pattern
Every one of these standards stops below the AI layer. They define how hardware should behave safely. None of them address the question of what happens when an AI model is making the decisions.
This isn’t a criticism — these standards were written before AI decision-making in robots was a real deployment concern. But it means there’s a documented, unfilled gap at exactly the layer that matters most for the next generation of deployments.
Where the active work is happening
The conversation about AI accountability in safety-critical systems is moving. The clearest signal is in Europe, where CEN/CENELEC JTC 21 is working under a mandate from the European Commission to produce harmonized standards for the EU AI Act.
Two active work items are directly in RCAN’s territory:
prEN 18229-1 — “AI trustworthiness framework — Part 1: Logging, transparency and human oversight.” This is WG4, and it’s essentially describing at the abstract level what RCAN §16 implements at the protocol level: how you record what an AI system did, make its behavior transparent, and ensure humans can intervene.
prEN ISO/IEC 24970 — “Logging.” WG3. More generic, covering AI logging requirements across system types. RCAN’s tamper-evident commitment chain is directly relevant here.
Both are racing toward a Q4 2026 deadline before the EU AI Act’s high-risk provisions apply. JTC 21 is writing horizontal standards — applicable across all high-risk AI systems, not just robots. Autonomous robots operating near humans in manufacturing, logistics, and healthcare are classified as high-risk — primarily under Annex I (Article 6(1)) via the Machinery Regulation (EU) 2023/1230, with a 20 January 2027 compliance deadline; some robot applications may also fall under Annex III categories with an August 2026 deadline.
The gap in those drafts is robotics-specific: what does protocol-level AI logging and human-in-the-loop gating actually look like when deployed on a robot that is physically moving in the world? That’s concrete implementation knowledge that abstract standards rarely have in the room when they’re being written.
One distinction that matters
Safety standards state conformity requirements. They don’t prescribe how manufacturers achieve them — that’s intentional, and important.
RCAN isn’t a compliance recipe. It’s the protocol-layer infrastructure that makes those conformity requirements demonstrable. When an assessor asks “show me that this AI system’s decisions are logged, traceable, and subject to human oversight,” RCAN is the mechanism that produces the evidence. Model identity, confidence, human-authorization state, tamper-evident chain — all carried at the protocol layer, independent of which model or provider you’re using.
The standards say what to demonstrate. RCAN provides the plumbing that makes it demonstrable.
A registry, not just a spec
Safety standards are one half of the problem. The other is identity.
I’ve argued before that robotics needs its own ICANN — a global, independent body for registering robots the way ICANN governs domain names. The internet’s namespace is governed by a multi-stakeholder nonprofit because no single company or government should own the map. The same logic applies to robots.
The gap is worse than it looks. Today if you buy a used robot, there is no standard way to verify its provenance — who built it, who owned it, whether it has outstanding safety incidents. If a manufacturer goes bankrupt, there is no registry of last resort. If two robots share the same identifier, there is no dispute resolution process. We are building a physical internet with no addressing authority.
robotregistryfoundation.org/registry is an early attempt at this. Five seed robots are registered — RRN-000000000001 (OpenCastor Bob), RRN-000000000002 (Spot, Boston Dynamics), RRN-000000000003 (Unitree Go2), RRN-000000000004 (SO-ARM101, TheRobotStudio / Hugging Face), and RRN-000000000005 (OpenCastor Alex) — each with provenance, ownership proof, and spec data. The registry has since moved from rcan.dev to robotregistryfoundation.org.
But a static registry on one domain isn’t infrastructure. What it needs to become:
- A machine-readable API — so OpenCastor, fleet managers, and conformity tools can resolve RRNs at runtime, not just humans browsing a website (issue #11)
- A federation protocol — so manufacturers, enterprises, and community groups can operate their own RCAN registries that interoperate with the root, without any single point of control (issue #12)
- Independent governance — a Robot Registry Foundation with multi-stakeholder representation: manufacturers, safety standards bodies, academia, civil society. Not owned by continuonai. Not controllable by any single party (issue #13)
- Manufacturer verification — a tiered trust model that distinguishes community-submitted entries from manufacturer-confirmed ones, verified via DNS and signed attestation (issue #14)
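Even before the machine-readable API exists, the identifier format itself is checkable offline. A sketch, assuming RRNs follow the `RRN-` plus twelve-digit pattern visible in the seed entries above; the registry's normative identifier grammar may differ.

```python
import re

# Pattern inferred from the seed entries (RRN-000000000001 through
# RRN-000000000005); the registry's normative grammar may differ.
RRN_PATTERN = re.compile(r"RRN-(\d{12})")

def parse_rrn(rrn: str) -> int:
    """Validate an RRN string and return its numeric registry index."""
    m = RRN_PATTERN.fullmatch(rrn)
    if m is None:
        raise ValueError(f"not a valid RRN: {rrn!r}")
    return int(m.group(1))
```

A runtime resolver (issue #11) would layer on top of this: parse the RRN locally, then query the root or a federated registry for provenance, ownership proof, and spec data.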
The EU AI Act adds urgency here too. High-risk AI systems must be registered with national authorities under Art. 49. An independent, open robot registry — with cryptographic provenance and verifiable audit chains — could fulfill that function across jurisdictions in a way that a patchwork of national databases cannot.
If you’re working on robot safety standards, fleet infrastructure, or AI governance and think this is worth building: the scaffolding is open, the issues are filed, and the registry is running. Come contribute.
What we’ve built toward this
The work product that standards engagement actually requires isn’t outreach — it’s the thing you’re submitting. Over the past few months that’s meant:
- A clause-by-clause alignment with ISO 10218-1:2025: which RCAN provisions map to which requirements, and where RCAN addresses gaps the standard doesn’t yet cover
- An EU AI Act article mapping: RCAN §16 as the protocol-level implementation for Art. 12 (record keeping), Art. 13 (transparency), Art. 14 (human oversight)
- A conformance test suite: L1/L2/L3 levels, 19 machine-readable test cases, a live checker script — so “RCAN-compliant” is a verifiable claim
- Industrial robot profiles: pre-built configurations for Universal Robots UR5e/UR10e, KUKA iiwa7, Franka Research 3, Boston Dynamics Spot, and a generic differential-drive platform — near-zero-friction adoption for the most common hardware
- A technical brief: “AI Decision Accountability at the Protocol Layer: Addressing the Gap in ISO 10218-1:2025” — written in the language of robot safety engineers, designed to travel with standards committee submissions
ISO committees don’t write standards from scratch. They codify existing practice, published specifications, and documented implementations. The goal is to be the existing work that the next revision cycle builds on.
The timeline
The EU AI Act’s Annex III high-risk provisions apply from August 2026, with the Machinery Regulation route following in January 2027. That’s the forcing function driving JTC 21’s accelerated schedule.
The window for establishing an open protocol as the reference implementation for AI accountability in robotics — before every major OEM builds proprietary solutions and the ecosystem fragments the way automotive did — is roughly the next twelve months.
What’s open
Everything is public:
- RCAN spec v1.3: rcan.dev/spec
- OpenCastor v2026.3.12.0: github.com/craigm26/OpenCastor
- QuantumLink-Sim v0.3.0: github.com/craigm26/Quantum-link-Sim
- Compliance and conformance docs: github.com/continuonai/rcan-spec/tree/master/docs
- Technical brief: rcan.dev/whitepaper
If you work in industrial robotics, robot safety, or EU AI Act conformity — or if you’re involved in standards work in any of these areas — the spec is open and contributions are welcome.