Building OpenCastor

Declarative robotics
starts with a manifest.

ROBOT.md is to a robot what CLAUDE.md is to a codebase. One file — YAML frontmatter + markdown prose — so Claude Code, Claude Desktop, Cursor, Zed, Gemini CLI, or any MCP-aware agent can safely operate the robot. I'm a one-person team building the spec, the runtimes, the surfaces, and the public registry — almost entirely with Claude Code.
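
A minimal sketch of what such a manifest could look like. The field names below are illustrative assumptions for this page, not the published robot-md schema; only the frontmatter-plus-prose shape is the point.

```markdown
---
# Illustrative fields, not the published robot-md schema
name: bob
model: SO-ARM101
---

# Operating notes

Keep joint speeds low whenever a person is inside the workspace.
```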

Try it on your robot

```shell
pip install robot-md robot-md-mcp
robot-md init --yes
```
Craig Merry · Robotics & AI Engineer
Sacramento, CA · deaf engineer · accessibility-first by default

The Ecosystem

Eleven repos. One stack.

A protocol at the bottom (RCAN), a manifest in the middle (robot-md), MCP and Agent SDK surfaces on top, a public registry on the side. Each layer is its own repo, with its own release cadence and its own clearly scoped concern.
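
Because every layer consumes the same file shape, a consumer at any level only needs the frontmatter/prose split before handing the pieces to its own parser. A minimal Python sketch, assuming nothing beyond the `---`-delimited convention; the helper name and sample fields are hypothetical, not part of the spec.

```python
def split_manifest(text: str) -> tuple[str, str]:
    """Split a ROBOT.md-style file into (frontmatter, body).

    Assumes the conventional '---'-delimited YAML frontmatter block;
    files without one are treated as all body.
    """
    if not text.startswith("---\n"):
        return "", text
    # Three parts: the empty prefix, the frontmatter, and the prose body.
    _, frontmatter, body = text.split("---\n", 2)
    return frontmatter.strip(), body.lstrip()


sample = """---
name: bob
model: SO-ARM101
---
# Operating notes
Keep joint speeds low near humans.
"""

fm, body = split_manifest(sample)
```

From here, `fm` goes to a YAML parser and `body` stays as markdown prose for the agent, which is what lets MCP servers, plugins, and registries all share one file.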

Method

Built with Claude Code

The point isn't that an LLM helped me write code. The point is that a one-person team — me — was able to ship and maintain a multi-repo ecosystem because Claude Code's long-context behavior made the cascade tractable.

A spec change in rcan-spec ripples through rcan-py and rcan-ts; a manifest schema change ripples through robot-md, every consumer, and the registry. Claude Code holds the context that would otherwise be split across a team.

The Anthropic-native bias of the surfaces — MCP server, Claude Code plugin marketplace, Agent SDK on the pendant and dispatcher, Claude as the exclusive eval model in autoresearch — is on purpose. Every Anthropic primitive gets a first-class surface.

By the numbers

April 2026 · single-person team

1,083 · Commits across the ecosystem this month
67 · GitHub releases shipped in 30 days
~1.98k · Tests across robot-md, rcan-py, rcan-ts (1,100 + 288 + 593)
11 · Active ecosystem repos · 5 EU AI Act endpoints live

Hardware in the loop: a SO-ARM101 arm called bob, registered as RRN-000000000001. Every shipped feature is exercised against real servos before merge.

Read the build log →

Earlier work

Before robot-md

Computer vision, accessibility, and edge AI projects that informed the choices in OpenCastor.

Accessibility · XR

LiveCaptionsXR

Spatial captions for XR collaboration. On-device speech recognition with anchored transcripts so deaf/HoH participants can follow conversations inside headsets.

Project page →
Computer Vision

BicycleRadar

Predictive collision avoidance for cyclists. Sensor fusion + ML produced 3.2-second warning windows at 96% prediction accuracy.

Project page →
Edge AI

Dronevade

Edge computer-vision platform for drone detection. Custom YOLO models, RF + thermal fusion, designed for wildfire responders and utilities.

Project page →
Safety · Wearable

HeatCompass

Personal heat-stress monitoring for outdoor workers and athletes. Edge inference on a wrist-worn device.

Project page →