Embodied AI Runtime OpenCastor
Universal runtime for embodied AI that connects models, safety controls, channels, and robot hardware through RCAN configuration.
I design and deploy AI that thrives in the real world—where bandwidth is scarce, safety matters, and operations teams need explainable telemetry.
Active Project
OpenCastor ecosystem · 11 repos
Stack
ROBOT.md · MCP · Claude Agent SDK · RCAN · Pi 5
April 2026
1,083 commits · 67 releases shipped
Coverage
~1.98k tests · 5 EU AI Act endpoints live
Declarative robotics, Anthropic-native
A one-person team uses Claude Code to build the manifest spec, three language SDKs, an MCP server, an Agent SDK dispatcher, a Claude Code plugin marketplace, and a public registry implementing the first five EU AI Act-aligned compliance endpoints. Every shipped feature is exercised against bob, a SO-ARM101 arm registered as RRN-000000000001.
Manifest first
A single ROBOT.md describes the robot well enough that any agent can drive it.
Hardware in the loop
Every shipped feature exercised against real servos before merge.
Compliance-ready
Public registry implementing EU AI Act-aligned endpoints, signed with a local key.
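To make "manifest first" concrete, here is a hypothetical sketch of what a ROBOT.md could declare for a robot like bob. The field names (rrn, channels, safety limits) are illustrative assumptions, not the actual OpenCastor manifest schema.

```markdown
---
# Hypothetical manifest fields — illustrative only,
# not the real ROBOT.md spec.
rrn: RRN-000000000001
name: bob
model: SO-ARM101
channels: [mcp, serial]
safety:
  workspace_limit_mm: 400
  max_velocity_deg_s: 60
---

# bob — SO-ARM101 arm

Any agent that reads this manifest learns the robot's identity,
its available channels, and the safety envelope it must respect.
```

The idea is that the manifest, not the driver code, is the contract an agent programs against.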
Open registry for robot identity — permanent RRNs, multi-tier verification (Community → Verified → Certified → Accredited), Ed25519 ownership proof, and federated identity for the global robot ecosystem.
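The Ed25519 ownership proof above can be sketched in a few lines of Python with the widely used `cryptography` package. The challenge string and its format are assumptions for illustration; the registry's actual protocol and schema may differ.

```python
from cryptography.hazmat.primitives.asymmetric import ed25519

# Hypothetical flow: an owner proves control of an RRN by signing
# a registry-issued challenge with their private key.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Illustrative challenge format — not the registry's real schema.
challenge = b"RRN-000000000001:ownership-challenge"
signature = private_key.sign(challenge)

# The registry checks the signature against the public key on file;
# verify() returns None on success and raises InvalidSignature otherwise.
public_key.verify(signature, challenge)
print("ownership proof verified")
```

Because verification needs only the public key, the registry never holds owner secrets, which is what makes federated identity across registries tractable.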
Cross-platform autonomous AI development with 10+ agents. Deliberate context rotation for long-running tasks on macOS, Linux, Windows, and cloud.
Cloud platform for visualizing codebases as cities. AI agent infrastructure with context recovery, session tracking, and decision tracing.
Spatial computing platform for robot control and XR interfaces. Evolved from LiveCaptionsXR with 612 commits in two months.
One brain, many shells—embodied AI for home robotics. Under $600 in parts with shared memory across robot bodies.
Spatial captions for XR platforms that keep deaf and hard-of-hearing users inside the conversation in virtual spaces.
Clear phases keep AI work grounded in measurable outcomes while giving product, engineering, and operations teams full visibility into progress.
Feed user interviews, field studies, and feasibility prototypes into a concise technical charter and ROI model.
Ship iterative releases that pair robust ML pipelines with test harnesses, telemetry, and stakeholder demos.
Operationalize the solution with playbooks, alerting, and continuous feedback loops to keep accuracy high post-launch.
Real-time detection, tracking, and geospatial analytics for mission-critical video.
Model prototyping, MLOps pipelines, and predictive systems aligned to business outcomes.
Deployments on Jetson, Raspberry Pi, and embedded hardware with low-latency inference.