A Full Day of Building: RCAN Swarm Safety, §17 Distributed Registry, and What the Robot Registry Foundation Needs to Become Real
Update (March 12, 2026): RCAN v1.3 is now current — §18–§20 and Appendix B promoted to Stable, §21 (Registry Integration) introduced. Registry moved to robotregistryfoundation.org. Read what’s new →
Today was one of those rare sessions where everything that was supposed to ship actually shipped, every CI failure got traced to its root cause and fixed, and the architecture got meaningfully clearer in the process.
Here’s what happened, what it means, and what still needs to be built.
What shipped today
Four coordinated releases across the RCAN ecosystem:
| Package | Version | Distribution |
|---|---|---|
| opencastor | v2026.3.13.11 | PyPI |
| rcan (Python SDK) | v0.3.0 | PyPI |
| @continuonai/rcan-ts | v0.3.0 | npm |
| rcan-spec | v1.3.0 | rcan.dev / Cloudflare Pages |
These aren’t independent releases. They’re a coordinated spec + runtime + SDK stack. When you upgrade rcan-py, you get the same NodeClient API that rcan-ts and OpenCastor speak. When the spec ships a new section, all three downstream packages absorb it at the same time.
OpenCastor is an implementation of the RCAN specification
Let me be direct about what OpenCastor is and isn’t.
OpenCastor is a robot runtime that implements the RCAN specification. It’s one implementation. The specification — RCAN — is the open protocol that defines how robots address each other, how AI decisions get gated and logged, how identity is verified, and how robots in a network can trust each other.
The relationship looks like this:
RCAN Specification (open protocol, CC BY 4.0)
├── OpenCastor (Python runtime — one implementation)
├── rcan-py (Python SDK — for building your own)
├── rcan-ts (TypeScript/Node SDK — for web + edge)
└── Your implementation — welcome
Any hardware provider, any software company, any research lab can implement RCAN. The protocol doesn’t care whether your robot runs on a Raspberry Pi, a Jetson, a Qualcomm RB3 Gen 2, or a custom ASIC. It doesn’t care whether your AI is running local LLaMA, GPT-4o, or a fine-tuned model on a Hailo NPU. It doesn’t care whether you’re writing Python, Rust, Go, or C++.
What it cares about:
- Identity: Every robot has an RRN (Robot Registry Number) — a globally unique, verifiable identifier anchored at robotregistryfoundation.org or a federated node.
- Accountability: Every AI decision gets logged with model identity, confidence score, and a tamper-evident HMAC chain.
- Safety gates: Confidence gates and human-in-the-loop gates are first-class protocol constructs, not afterthoughts bolted onto individual implementations.
- Verifiability: Any party with the right access — operator, regulator, insurer — can independently verify the audit trail.
OpenCastor ships all of this out of the box. But you don’t need OpenCastor to be RCAN-compliant. You need to implement the protocol.
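To make the four requirements concrete, here is a hypothetical sketch of what an RCAN-style YAML config covers. The field names below (other than the `agent:` key and the RRN itself, which the spec and this post mention) are illustrative assumptions, not the normative schema:

```yaml
# Hypothetical RCAN config sketch — field names are illustrative,
# not the normative schema from the spec.
rrn: RRN-000000000001        # globally unique Robot Registry Number
agent:
  model: llama-3.1-8b        # which AI model drives decisions
  provider: local
safety:
  confidence_gate: 0.85      # block actions below this model confidence
  hitl:                      # human-in-the-loop gate
    required_for: [swarm_command]
audit:
  commitment_chain: true     # tamper-evident HMAC-chained decision log
```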
The new claim: RCAN-Swarm Safe
Today we formalized something the protocol was already technically capable of but hadn’t explicitly claimed: RCAN-Swarm Safety.
The idea is simple. In a traditional multi-robot system, when Robot A receives a command from Robot B, there’s usually no standard way to verify:
- Is B actually who it claims to be?
- Has B been certified as safe to operate in this context?
- Will this interaction be auditable after the fact?
RCAN answers all three:
Peer identity verification:
```python
from rcan import NodeClient

client = NodeClient()
peer = client.resolve("RRN-BD-000000000042")
tier = peer['record'].get('verification_tier', 'community')
if tier not in ('certified', 'accredited'):
    raise SecurityError(f"Peer robot not verified for swarm commands: {tier}")
```
Offline resilience: The §17 Distributed Registry Node Protocol means peer identity resolution works even when the central registry is temporarily unreachable. Every node maintains a local SQLite cache with stale fallback — the swarm keeps working.
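The stale-fallback pattern can be sketched in a few lines — assuming a caller-supplied `fetch_from_registry()` network call; the table layout and TTL below are illustrative, not the spec's:

```python
import json
import sqlite3
import time

CACHE_TTL = 3600  # seconds before a cached record counts as stale (illustrative)

def resolve_with_cache(db, rrn, fetch_from_registry):
    """Resolve an RRN, falling back to a stale local cache when the
    registry is unreachable. Sketch of the S17 pattern, not the spec."""
    db.execute("CREATE TABLE IF NOT EXISTS cache "
               "(rrn TEXT PRIMARY KEY, record TEXT, fetched_at REAL)")
    row = db.execute("SELECT record, fetched_at FROM cache WHERE rrn = ?",
                     (rrn,)).fetchone()
    if row and time.time() - row[1] < CACHE_TTL:
        return json.loads(row[0])          # fresh cache hit, no network
    try:
        record = fetch_from_registry(rrn)  # network call to the registry
        db.execute("INSERT OR REPLACE INTO cache VALUES (?, ?, ?)",
                   (rrn, json.dumps(record), time.time()))
        db.commit()
        return record
    except OSError:
        if row:
            return json.loads(row[0])      # registry down: serve stale record
        raise
```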
Full audit: Every swarm interaction is logged to the commitment chain. Every action records who did it, what model made the decision, at what confidence, at what timestamp.
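The commitment chain can be sketched as an HMAC chain in which each entry's tag covers the previous entry's tag, so any retroactive edit is detectable. This is a minimal illustration, not OpenCastor's actual record format or key handling:

```python
import hashlib
import hmac
import json
import time

def append_entry(chain, key, actor_rrn, model, confidence, action):
    """Append a record whose HMAC covers the previous entry's tag.
    Illustrative sketch, not OpenCastor's wire format."""
    prev_tag = chain[-1]["tag"] if chain else "genesis"
    entry = {
        "actor": actor_rrn,        # who did it
        "model": model,            # which model made the decision
        "confidence": confidence,  # at what confidence
        "action": action,
        "ts": time.time(),         # at what timestamp
        "prev": prev_tag,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["tag"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    chain.append(entry)
    return entry

def verify_chain(chain, key):
    """Recompute every tag; tampering with any record breaks the chain."""
    prev_tag = "genesis"
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "tag"}
        if body["prev"] != prev_tag:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(entry["tag"], expected):
            return False
        prev_tag = entry["tag"]
    return True
```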
For a robot to claim RCAN-Swarm Safe, it needs:
- A valid RCAN config with a registered RRN
- Verification tier ≥ `verified`
- Commitment chain enabled
- At least one confidence gate
- HITL gate configured for any swarm-level commands
OpenCastor satisfies all of this. The reference robot (Bob, running on a Raspberry Pi 5 with Hailo-8 and OAK-D) is the first RCAN-registered swarm-safe robot: RRN-000000000001 in the live registry.
The full use-case documentation lives at rcan.dev/use-cases/swarm.
§17: The Distributed Registry Node Protocol
The other major piece that shipped today was §17 of the RCAN specification — the Distributed Registry Node Protocol.
The problem it solves: a centralized robot registry is a single point of failure and a governance bottleneck. As the robot ecosystem scales, you can’t funnel every identity resolution through one server. Industrial deployments, offline facilities, air-gapped environments — they all need the registry to work locally.
§17 specifies:
- Node delegation: Namespaced prefixes (e.g., `RRN-BD-...` for Boston Dynamics) can be delegated to manufacturer-operated nodes
- Federated resolution: Any resolver walks the delegation tree — root registry → namespace node → local cache
- Sync protocol: Nodes synchronize records on a push/pull basis with conflict resolution (root wins on namespace conflicts to prevent poisoning)
- Node manifest: Every node serves `/.well-known/rcan-node.json` — its capabilities, supported namespaces, public key
- Error codes 6001–6006: Standardized error taxonomy for federation failures
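The delegation walk reduces to longest-prefix matching over a table of delegated namespaces, with the local cache as offline fallback. A simplified sketch with made-up data structures (the real protocol speaks HTTP to node manifests and handles error codes 6001–6006, all omitted here):

```python
def resolve_rrn(rrn, root_registry, delegations, cache):
    """Walk root -> namespace node, falling back to the local cache.
    `delegations` maps a namespace prefix (e.g. "RRN-BD-") to that
    manufacturer node's records. Simplified illustration of S17."""
    try:
        # Longest delegated prefix wins (e.g. "RRN-BD-" for Boston Dynamics)
        prefix = max((p for p in delegations if rrn.startswith(p)),
                     key=len, default=None)
        record = delegations[prefix][rrn] if prefix else root_registry[rrn]
        cache[rrn] = record   # refresh the local cache on every resolution
        return record
    except KeyError:
        # Node unreachable or record missing upstream: stale cache fallback
        return cache.get(rrn)
```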
This is the infrastructure layer that lets RCAN scale beyond a single centralized registry. It’s what makes the Robot Registry Foundation (more on that below) technically viable.
The spec page is at rcan.dev/spec/section-17.
What also shipped: a full CI debugging marathon
Shipping four coordinated releases with green CI across all four repos sounds simple. It took most of the day and hit every class of problem a multi-repo ecosystem can hit.
In rough order:
rcan-spec:
- Astro content collection `robots/` was defined in the schema but the directory was empty — Astro throws at build time on empty data collections. Fixed by registering the first real robot (Bob, `RRN-000000000001`).
- Python code block in `error-codes.astro` had a raw `{e.rrn}` — Astro treats `{}` as JSX expressions, so `e` was undefined at SSG render time. Fixed by escaping the braces as HTML entities.
- `release-notify.yml` had a YAML parse error from a multi-line `--body` string that fell to column 0 inside a literal block scalar. Rewrote to use env vars.
- SDK Smoke Test was using `import rcan from '@continuonai/rcan-ts'` (default import) — rcan-ts only exports named exports. Fixed to `import * as rcan`.
rcan-py:
- `test_version.py` was asserting `"0.2.0"` in the version output — the package was just bumped to `0.3.0`. Fixed to use `rcan.__version__` dynamically.
- `sample.rcan.yaml` fixture was missing the required `agent:` key. Added it.
- `spec-smoke.yml` was unpacking `validate_config()` as a `(bool, list)` tuple — the function returns a `ValidationResult` object. Fixed.
OpenCastor:
- Release workflow gate checks CI on the exact tag commit SHA. The tag was pushed at `c72441b` (version bump), but CI on that commit wasn't visible to the gate at the time the Release workflow started. Deleted and re-pushed the tag at `9a0f617` (HEAD, CI green).
- 146 ruff lint errors in test files — mostly unused imports and unsorted import blocks. Auto-fixed.
- `test_deepseek_provider.py` — provider tests fail without `openai` installed locally. Added `pytestmark = pytest.mark.skipif(...)` to skip the whole module gracefully.
- Integration test was unpacking `validate_config()` as a tuple — same fix as rcan-py.
Every failure had a clear root cause. None were random. The pattern that emerges: when you version-bump across multiple repos simultaneously, every hardcoded version string and every assumption about return types becomes a latent bug. The fix is dynamic version checks and interface-based assertions, not literal string matches.
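The version fix above can be sketched as a shape check: assert that the version string is well-formed rather than pinning a literal that rots on every bump. The helper name is mine, not rcan-py's test suite:

```python
import re

# Matches simple MAJOR.MINOR.PATCH versions (illustrative; real projects
# may want full PEP 440 handling via packaging.version instead).
VERSION_RE = re.compile(r"^\d+\.\d+\.\d+$")

def assert_version_shape(version):
    """Interface-based check: the version has semver shape, regardless
    of which release we're on. Survives every coordinated bump."""
    assert VERSION_RE.match(version), f"unexpected version: {version}"
```

In a test this would be used as `assert_version_shape(rcan.__version__)` instead of `assert "0.2.0" in output`.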
What the Robot Registry Foundation needs
The Robot Registry Foundation (RRF) is the governance entity that should eventually own the RCAN protocol and the robot registry (now live at robotregistryfoundation.org). Right now it’s a governance charter and a set of commitments — not yet a legal entity. Here’s an honest accounting of what it needs to become real.
1. A legal home
The RRF needs to be incorporated — most likely as a US 501(c)(6) trade association or a 501(c)(3) foundation, depending on whether the primary mission is industry coordination or public benefit. The distinction matters for:
- Tax treatment of member dues
- The ability to accept grants (c3 only)
- Whether manufacturers can deduct membership as a business expense (c6 is cleaner here)
The EU analog would be a Belgian AISBL (similar to how many open standards bodies incorporate in Brussels). ISO liaison status is easier to achieve as a legal entity.
What’s needed: Legal counsel familiar with standards body incorporation. Budget: ~$15–30k for incorporation + initial legal setup.
2. A domain with standing
rcan.dev works as a developer-facing home. But for ISO/TC 299 engagement, regulatory credibility, and the kind of institutional gravity that makes a standards body feel permanent, the foundation needs robotregistryfoundation.org (and .com).
The rcan.dev site can redirect. The foundation’s public face — charter, governance, board, membership — should live at a domain that signals permanence and organizational identity.
Update (March 12): Done. robotregistryfoundation.org is live — the registry has moved there. rcan.dev/registry now redirects to robotregistryfoundation.org/registry/. The domain piece is resolved; the remaining work is the legal entity and governance content that should live there.
3. An independent technical board
The protocol spec currently lives in continuonai/rcan-spec. That’s fine for bootstrapping. For a legitimate open standard, the spec needs to be governed by a body that includes:
- At least one major robot manufacturer (Universal Robots, Boston Dynamics, KUKA, or similar)
- At least one AI provider (Anthropic, Google, or equivalent)
- An academic safety researcher (MIT CSAIL, CMU Robotics, ETH Zürich)
- An independent standards professional (someone who has served on IEC or ISO committees)
- A regulator or regulatory liaison (NIST, EU ENISA, or equivalent)
The key insight from Roberta Nelson Shea’s response (she convenes ISO/TC 299 WG3) is that RCAN should not position itself as a compliance mechanism — it’s a communication protocol that makes verifiability possible. Standards say what to demonstrate; RCAN provides the plumbing that makes it demonstrable. The board needs people who understand this distinction and can articulate it to standards bodies.
What’s needed: Relationship building. The contacts exist — the conversations need to start.
4. Manufacturer verification infrastructure
The four-tier verification system (community → verified → certified → accredited) is implemented in the registry API. But the actual verification process — the human and legal procedures behind each tier — needs to be defined:
- Community: Self-attested. Automated. No cost.
- Verified: Manufacturer claims ownership of the RRN namespace. Requires email verification + Terms of Service agreement. ~$0.
- Certified: Independent test lab verifies the robot hardware against the RCAN YAML config. Think CE marking, but for robot identity. ~$500–2000 per robot model.
- Accredited: Full conformance assessment — commitment chain integrity, AI accountability layer, §16 implementation, safety gate configuration. Aligned with ISO 10218-2 conformity assessment. ~$5000+ per deployment.
The technical side (PATCH /api/v1/robots/:rrn/verify) is live. The procedural side doesn’t exist yet.
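The live endpoint can be sketched as a request builder — the method and path come from the text above, but the payload fields (`tier`, `evidence_url`) are my assumptions, not the registry API's actual schema:

```python
import json

# Tier names from the four-tier system described above
VALID_TIERS = ("community", "verified", "certified", "accredited")

def build_verify_request(rrn, tier, evidence_url=None):
    """Build the PATCH request for the registry's verify endpoint.
    Payload field names are illustrative assumptions, not the real schema."""
    if tier not in VALID_TIERS:
        raise ValueError(f"unknown verification tier: {tier}")
    body = {"tier": tier}
    if evidence_url:
        body["evidence_url"] = evidence_url  # e.g. a test-lab report
    return {
        "method": "PATCH",
        "path": f"/api/v1/robots/{rrn}/verify",
        "body": json.dumps(body),
    }
```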
What’s needed: A test lab partnership. Likely TÜV SÜD, Bureau Veritas, or UL for the hardware verification tier. An accreditation body willing to run the top-tier conformity assessments.
5. Recurring operating revenue
A standards body needs money to operate independently. The sustainable model for RCAN/RRF is:
- Free tier: Open protocol, free SDK, free RRN registration (community tier), free rcan-validate CLI
- Verified tier: $99/year per manufacturer namespace — enough to cover infrastructure, not enough to create a barrier
- Certified/Accredited tiers: Test lab fees (handled by lab, not RRF) + RRF listing fee ($500–1000/year)
- Enterprise support: Custom SLAs, priority issue response, private registry nodes — $20k+/year for large OEMs
This structure keeps the protocol open while generating enough revenue to fund a part-time executive director and cover legal/infrastructure costs.
What’s needed: A pricing page, a Stripe integration in the Cloudflare Workers API, and the first paying customers.
6. ISO/TC 299 engagement
The most important long-term path is getting RCAN referenced in, or aligned with, ISO 10218 (industrial robot safety) or one of the forthcoming AI-in-robotics standards being developed under JTC 21.
Roberta’s response made clear that the framing matters: RCAN provides the audit infrastructure that lets conformity assessors interrogate AI decisions in deployed robots. That’s valuable to safety standards because it answers the question “how do you demonstrate that your AI was operating within spec?” with something concrete and machine-verifiable.
The right initial ask to ISO/TC 299 WG3 isn’t “adopt our standard” — it’s “let us be a technical resource for the AI accountability gap.” Once that conversation is established, formal liaison status and eventual informative reference in a Technical Report is achievable.
What’s needed: Draft a technical brief (2 pages max) explaining RCAN’s role as audit infrastructure — not a compliance mechanism. Target the next WG3 meeting agenda. The JTC 21 contact (ENISA, EU AI Act harmonized standards) is a parallel path.
The honest state of things
The protocol exists and is implemented. The SDKs work. The CI is green. Five robots are now registered at robotregistryfoundation.org.
What doesn’t exist yet: legal incorporation, institutional relationships, a paying customer, and anyone besides me running this day-to-day.
That’s not a criticism — it’s a roadmap. Every serious open standard started with one person convinced the problem was real before anyone else was. ICANN grew out of Jon Postel’s personally maintained assigned-numbers list. Linux began as one developer’s mailing-list announcement.
The technical foundation is solid. The governance and institutional work is next.
If you’re building AI-driven robots and you want the protocol to succeed — whether for selfish reasons (you want the audit trail) or principled ones (you want the industry to have a safety standard before someone gets hurt) — the most useful thing you can do right now is register your robot, use the SDK, and tell me what’s broken.
The second most useful thing is to introduce me to someone at a robot manufacturer who cares about AI accountability. That’s where this goes next.