Teaching robots to speak MCP — and why the EU AI Act made us build it right
There’s a pattern I keep seeing in physical AI: the robot runtime is too smart and the interface is too dumb.
The runtime does real work — signing commands, enforcing trust levels, tracking hardware provenance, managing telemetry. But to actually use it, you write custom HTTP glue, hand-roll auth token handling, and build a new integration for every AI agent you want to connect. The intelligence layer keeps getting smarter while the interface layer stays bespoke.
This week I shipped castor mcp for OpenCastor — a full Model Context Protocol server that exposes the robot runtime as 12 discoverable tools. The motivation was partly convenience, partly the EU AI Act, and partly watching the broader MCP ecosystem move fast and realizing our unique layer was buried behind integration friction.
What MCP gives you on a robot
The before state: if you wanted Claude Code to command Bob (my Pi 5 + Hailo-8 robot), you were either going through WhatsApp relay (three hops), writing raw HTTP calls against the gateway (works, but nothing is discoverable), or using the Flutter app (great for me, useless for an AI agent).
The after state:
```shell
claude mcp add castor -- castor mcp --token $CASTOR_MCP_TOKEN
```
Now Claude Code, Codex, Gemini, or any MCP-capable agent can call robot_status, robot_command, harness_get, fleet_list, and eight more tools directly. No custom code. The tools self-describe. Any model, any provider.
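"The tools self-describe" means each one advertises its name, description, and a JSON Schema for its inputs, so an agent can discover how to call it without any custom code. A minimal sketch of what such a descriptor looks like — the tool name comes from the post, but the schema fields are illustrative, not OpenCastor's actual interface:

```python
# Hypothetical MCP tool descriptor. The "robot_command" name is from the
# post; the input schema fields here are made up for illustration.
robot_command = {
    "name": "robot_command",
    "description": "Send a signed motion command to the robot.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "command": {"type": "string"},
            "duration_s": {"type": "number", "minimum": 0},
        },
        "required": ["command"],
    },
}

def describe(tool: dict) -> str:
    """Render a one-line summary an agent could surface to a user."""
    required = tool["inputSchema"].get("required", [])
    return f"{tool['name']}({', '.join(required)}) -- {tool['description']}"
```

Because every MCP server exposes descriptors in this shape, "any model, any provider" follows for free: the agent reads the schema at connect time instead of shipping with baked-in knowledge of the robot.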
The LoA tier system
The interesting design decision was how to handle access control. A robot running physical AI has a genuine trust problem: you want a monitoring agent to read telemetry freely, but you don’t want it deploying harness configs or triggering upgrades. These are different risk levels and they need different authorization.
RCAN v2.2 (the protocol OpenCastor implements) already has Level of Assurance built in. I mapped that directly to MCP tokens:
| Tier | LoA | Who uses it | Tools |
|---|---|---|---|
| Read | 0 | Monitoring agents, dashboards | status, telemetry, fleet_list, rrf_lookup |
| Operate | 1 | Interactive agents, research runners | robot_command, harness_get, research_run |
| Admin | 3 | Deployment scripts, human-supervised ops | harness_set, system_upgrade, loa_enable |
The tier is tied to the token, not the model. A Gemini agent and a Claude Code session with identical tokens get identical access. The enforcement happens server-side, before the request hits the gateway — it’s not an app-layer check.
```yaml
mcp_clients:
  - name: "claude-code-laptop"
    token_hash: "sha256:abc123..."
    loa: 3
  - name: "gemini-monitor"
    token_hash: "sha256:def456..."
    loa: 0
```
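The server-side check this config drives can be sketched in a few lines. This is a minimal illustration, assuming a tool-to-minimum-LoA map matching the tier table above; the function name and exact tool names are mine, not the OpenCastor source:

```python
import hashlib

# Minimum LoA per tool, following the tier table in the post.
# (Tool names are hedged; the real server's registry may differ.)
MIN_LOA = {
    "robot_status": 0, "robot_telemetry": 0, "fleet_list": 0, "rrf_lookup": 0,
    "robot_command": 1, "harness_get": 1, "research_run": 1,
    "harness_set": 3, "system_upgrade": 3, "loa_enable": 3,
}

# token_hash -> granted LoA, as loaded from the mcp_clients config block.
CLIENTS = {
    "sha256:" + hashlib.sha256(b"example-token").hexdigest(): 3,
}

def authorize(token: str, tool: str) -> bool:
    """Reject the call before it ever reaches the gateway."""
    key = "sha256:" + hashlib.sha256(token.encode()).hexdigest()
    granted = CLIENTS.get(key)
    if granted is None:
        return False  # unknown token: deny everything
    # Unknown tools default to admin-only rather than open access.
    return granted >= MIN_LOA.get(tool, 3)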
The EU AI Act angle
I’ve been working through EU AI Act compliance for OpenCastor since the August 2026 deadline is closer than it looks. Article 11 requires technical documentation covering system identity, hardware provenance, model provenance, safety controls, and post-market monitoring. For a physical robot, that’s a lot of records to maintain.
What I realized while building the MCP server is that we’d already built the underlying infrastructure — we just hadn’t made it queryable. The Robot Registry Foundation stores the full provenance chain: the robot’s identity (RRN), its components (RCNs for CPU/NPU/camera), its loaded models (RMNs), and its harness configuration (RHN). All signed with ML-DSA-65.
Now rrf_lookup makes that chain accessible from any MCP client:
```text
rrf_lookup("RRN-000000000001") → Bob's identity, manufacturer, firmware_hash
rrf_lookup("RCN-000000000002") → Hailo-8 NPU, 26 TOPS, parent: Bob
rrf_lookup("RMN-000000000002") → OpenVLA 7b-1.0, apache-2.0, local
rrf_lookup("RHN-000000000001") → dual-brain harness, rcan_version: 2.2
```
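The "four tool calls" claim can be made concrete with a small sketch. The registry data here is mocked and the field names are illustrative, not the actual RRF schema; the point is the shape of the assembly, not the contents:

```python
# Mocked registry entries standing in for real rrf_lookup results.
# Values follow the examples in the post; field names are illustrative.
MOCK_RRF = {
    "RRN-000000000001": {"kind": "robot", "name": "Bob"},
    "RCN-000000000002": {"kind": "component", "name": "Hailo-8 NPU",
                         "parent": "RRN-000000000001"},
    "RMN-000000000002": {"kind": "model", "name": "OpenVLA 7b-1.0",
                         "license": "apache-2.0"},
    "RHN-000000000001": {"kind": "harness", "name": "dual-brain",
                         "rcan_version": "2.2"},
}

def rrf_lookup(rid: str) -> dict:
    return MOCK_RRF[rid]

def art11_record(rrn: str, rcn: str, rmn: str, rhn: str) -> dict:
    """Four lookups -> one machine-readable technical-documentation record."""
    return {
        "system_identity": rrf_lookup(rrn),
        "hardware_provenance": rrf_lookup(rcn),
        "model_provenance": rrf_lookup(rmn),
        "safety_controls": rrf_lookup(rhn),
    }
```

Because the output is a plain structure rather than prose in a binder, any authorized agent can retrieve, diff, and re-verify the record, which is the machine-readable property the post argues Art. 11 actually demands.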
A compliance agent — running on any model — can reconstruct the full Art. 11 record from four tool calls. That’s not an accident. The MCP interface made me think clearly about what “auditable” actually means: the records need to be machine-readable and retrievable by any authorized agent, not just stored somewhere.
The LoA gating maps to Article 9 (risk management) — the level of assurance required for a control command is declared, enforced, and logged. Article 14 (human oversight) maps to the token generation flow: a human explicitly runs castor mcp token --name NAME --loa 3 to grant elevated access, and can revoke it by removing the entry from the robot config.
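The grant/revoke flow behind that CLI command can be sketched as follows. This is an assumed implementation, not the actual `castor mcp token` code: the function names are mine, and the only claims taken from the post are that a human mints the token, only its hash lands in the robot config, and revocation is deleting the entry:

```python
import hashlib
import secrets

def mint_token(name: str, loa: int, config: list) -> str:
    """Human-initiated grant: generate a token, store only its hash."""
    token = secrets.token_urlsafe(32)
    config.append({
        "name": name,
        "token_hash": "sha256:" + hashlib.sha256(token.encode()).hexdigest(),
        "loa": loa,
    })
    return token  # shown once to the human; never stored in cleartext

def revoke(name: str, config: list) -> None:
    """Revocation is just removing the entry from the config."""
    config[:] = [c for c in config if c["name"] != name]
```

Storing only the hash means a leaked robot config cannot be replayed as a credential, and the Article 14 story stays simple: every elevated grant traces back to a human running the CLI.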
What’s next
The current transport is stdio — works with Claude Code and any local MCP client today. HTTP/SSE is on the roadmap, which will let remote agents subscribe to live telemetry streams rather than polling. When that ships, the same robot becomes accessible to multiple agents simultaneously, each operating within their declared LoA.
The broader thing I’m excited about is the separation of concerns this enables. OpenCastor handles signing, trust enforcement, fleet registry, and hardware abstraction. The AI brain is pluggable — swap providers, run experiments with different models, mix specialized agents for different scopes. The runtime doesn’t care what model is on the other end of the MCP connection.
That’s what physical AI should look like: the intelligence layer is commoditized, the compliance layer is solid, and the interface is standard.
OpenCastor is open source. The MCP server is in castor/mcp_server.py. Issue #775 has the full design rationale.