Contribute: What If Every Idle Robot Ran Science?
Chris Albon, Director of ML at Wikimedia Foundation, tweeted something that lodged itself in my brain:
“How long before someone launches SETI@Home but the AGI era?”
He was joking. We built it anyway.
First, What Is OpenCastor?
Before we get to the feature, some context matters — because this isn’t about one specific robot setup.
OpenCastor is an open-source runtime layer that sits between a robot’s hardware and its AI. It doesn’t care what your robot looks like. A Raspberry Pi with a Hailo NPU on a desk. A ROS2 rover with a Jetson. A warehouse robot running Ubuntu. An educational arm in a classroom. If it has a compute board and can run Python, OpenCastor can be its runtime.
Think of it as the operating layer that handles the stuff every robot needs but nobody wants to build from scratch: authentication and access control, safety invariants, telemetry streaming, agent harness management, skill execution, and a standardized API. You bring the hardware and the AI model. OpenCastor gives them a way to talk to each other safely.
The entire project is open source: github.com/craigm26/OpenCastor.
RCAN: The Robot Communication & Authentication Network
OpenCastor implements the RCAN specification — an open protocol for robot identity, communication, and safety. RCAN defines:
- RURI (Robot URI): A canonical identifier for every robot, like a URL but for physical machines. Format: `rcan://registry.domain/manufacturer/model/version/serial`
- Scopes: A hierarchy of permission levels — `discover`, `status`, `chat`, `control`, `safety` — so a guest can check a robot's status but can't drive its motors
- Message types: A standardized set of commands, responses, telemetry, and safety signals that work across any RCAN-compatible robot
- Safety invariants: Protocol-level guarantees like P66 (priority preemption) that are enforced in code, not prompts
- Level of Assurance (LoA): JWT-based identity verification so robots know who’s talking to them
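Given the RURI format above, parsing one is a few lines. A minimal sketch, assuming the four-segment path shown in the spec list; the real parsers live in `rcan-py` and `rcan-ts`:

```python
from urllib.parse import urlparse

def parse_ruri(ruri: str) -> dict:
    """Split an RCAN RURI into its components (illustrative sketch)."""
    parts = urlparse(ruri)
    if parts.scheme != "rcan":
        raise ValueError(f"not an RCAN URI: {ruri!r}")
    # Path is assumed to be exactly manufacturer/model/version/serial
    manufacturer, model, version, serial = parts.path.strip("/").split("/")
    return {
        "registry": parts.netloc,
        "manufacturer": manufacturer,
        "model": model,
        "version": version,
        "serial": serial,
    }
```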
RCAN is maintained by ContinuonAI with reference implementations in both Python and TypeScript.
The Robot Registry
Every OpenCastor robot registers with a canonical identity. Our reference setup uses Firebase Firestore, but the registry is an abstraction — you can back it with any database. Each robot gets:
- A unique RRN (Robot Resource Name)
- Live telemetry streaming (status, hardware profile, active model, contribution stats)
- Fleet-level visibility through the OpenCastor client app
This registry is what makes fleet coordination possible. You can’t coordinate idle compute across robots if you don’t know what robots exist, what hardware they have, and whether they’re busy.
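Because the registry is an abstraction, the interface is small. A hypothetical sketch of what a backend-agnostic registry might look like; these class and field names are illustrative, not the actual OpenCastor API:

```python
import abc
from dataclasses import dataclass, field

@dataclass
class RobotRecord:
    rrn: str              # Robot Resource Name
    hw_profile: dict      # detected compute hardware
    status: str = "idle"
    telemetry: dict = field(default_factory=dict)

class Registry(abc.ABC):
    """Any database can sit behind this; the reference setup uses Firestore."""
    @abc.abstractmethod
    def register(self, record: RobotRecord) -> None: ...
    @abc.abstractmethod
    def lookup(self, rrn: str) -> "RobotRecord | None": ...

class InMemoryRegistry(Registry):
    """Trivial backend for tests and local development."""
    def __init__(self):
        self._robots: dict[str, RobotRecord] = {}
    def register(self, record):
        self._robots[record.rrn] = record
    def lookup(self, rrn):
        return self._robots.get(rrn)
```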
The Insight
Here’s the thing about robots: they idle. A lot.
Our reference robot “Bob” — a Pi 5 with a Hailo-8L NPU — sits idle about 20 hours a day. But Bob is just one configuration. An OpenCastor robot could be a Jetson Orin Nano doing 40 TOPS, or a Coral TPU doing 4 TOPS, or a plain CPU with no accelerator at all. The runtime detects whatever hardware is present and profiles it automatically:
```python
import os

def get_hw_profile() -> dict:
    """Detect whatever compute is present and report it."""
    profile = {'cpu_cores': os.cpu_count() or 1}
    if os.path.exists('/dev/hailo0'):          # Hailo NPU device node
        profile['npu'] = 'hailo'
    if os.path.exists('/dev/apex_0'):          # Coral Edge TPU (PCIe)
        profile['tpu'] = 'coral'
    if os.path.exists('/proc/driver/nvidia'):  # NVIDIA/CUDA driver present
        profile['gpu'] = 'cuda'
    # Other accelerators can be added via standard interfaces
    return profile
```
The point isn’t “26 TOPS.” The point is that every robot has some compute, and most of it goes unused most of the time. Even a 4-core ARM CPU doing nothing for 8 hours is waste. Across a fleet, the aggregate is meaningful.
| Fleet Size | Avg TOPS | Idle Hours/Day | TOPS-Hours/Day |
|---|---|---|---|
| 100 robots | varies | 8h | depends on fleet hardware |
| 1,000 robots | varies | 8h | significant |
| 10,000 robots | varies | 8h | approaching research cluster |
The numbers scale with whatever hardware the fleet has. NPU-equipped robots get NPU-optimized work units. CPU-only robots get CPU tasks. The coordinator matches work to capability.
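The matching step can be sketched in a few lines. This is an illustration only; the field names (`npu`, `tpu`, `needs`) are assumptions, not the coordinator's actual schema:

```python
def match_work(hw_profile: dict, work_units: list[dict]) -> "dict | None":
    """Pick a work unit the robot can run, preferring accelerator-optimized
    units when an accelerator is present (illustrative sketch)."""
    has_accel = hw_profile.get("npu") or hw_profile.get("tpu")
    if has_accel:
        # NPU-equipped robots get NPU-optimized units first
        for wu in work_units:
            if wu.get("needs") == "npu":
                return wu
    # Any robot can take a CPU work unit
    for wu in work_units:
        if wu.get("needs", "cpu") == "cpu":
            return wu
    return None
```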
What We Built
`castor contribute` is an idle-compute donation skill integrated across the entire OpenCastor ecosystem. It’s not a side script or a cron job — it’s a first-class RCAN scope with protocol-level safety guarantees.
How It Works
```
Robot idle > threshold
        ↓
Active command? → YES → Wait
        ↓ NO
Fetch work unit from coordinator
        ↓
Run on best available hardware (NPU → GPU → CPU)
        ↓
Command received? → YES (P66) → Cancel immediately
        ↓ NO
Submit result to coordinator
        ↓
Log to telemetry → fleet dashboard
```
The core loop is hardware-agnostic. It asks the coordinator for a work unit, runs it on whatever accelerator is available (falling back gracefully through NPU → GPU → CPU), and submits the result. If a real command arrives mid-computation, the work unit is cancelled within 100ms.
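A single pass of that loop can be sketched as follows. Here `fetch` and `submit` stand in for the coordinator methods and `preempt` is the runtime's command signal; all names are illustrative, not the actual OpenCastor internals:

```python
import threading
from dataclasses import dataclass

@dataclass
class WorkUnit:
    payload: int
    def run(self, cancel: threading.Event) -> int:
        # A real runner polls `cancel` between inference batches.
        return self.payload * 2

def contribute_once(fetch, submit, preempt: threading.Event):
    """One pass of the contribution loop: fetch a unit, run it, then
    either submit the result or discard it if preempted mid-run (P66)."""
    unit = fetch()
    if unit is None:
        return None
    result = unit.run(cancel=preempt)
    if preempt.is_set():
        return None          # a real command arrived: drop, never submit
    submit(result)
    return result
```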
P66 Preemption: Non-Negotiable Safety
This is the part that matters most. OpenCastor’s P66 invariant is absolute: any active command preempts contribution immediately. This is enforced at the runtime level, below the AI model, below the skills. The `contribute` scope sits at level 2.5 — above `chat` (so casual conversation doesn’t interrupt a computation) but below `control` (so any real command takes priority).
P66 is code, not a prompt. Contribution can’t override it because contribution can’t even see it.
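The preemption check itself is just a numeric comparison. In this sketch, only `contribute` at 2.5 and the relative ordering come from the text; the other level values are assumed placements:

```python
# Level 2.5 for contribute is from the spec discussion above; the
# remaining numeric levels are illustrative assumptions.
SCOPE_PRIORITY = {
    "discover": 0.0,
    "status": 1.0,
    "chat": 2.0,
    "contribute": 2.5,
    "control": 3.0,
    "safety": 4.0,
}

def should_preempt(incoming_scope: str) -> bool:
    """P66 check: any scope strictly above contribute preempts it."""
    return SCOPE_PRIORITY.get(incoming_scope, 0.0) > SCOPE_PRIORITY["contribute"]
```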
Coordinators
The system uses a coordinator abstraction:
```python
import abc

class Coordinator(abc.ABC):
    @abc.abstractmethod
    def fetch_work_unit(self, hw_profile, projects) -> "WorkUnit | None": ...

    @abc.abstractmethod
    def submit_result(self, result: "WorkUnitResult") -> bool: ...
```
Two methods. Want to connect to BOINC, Folding@home, or your own research platform? Implement those two methods. Today we ship:
- BOINCCoordinator: Connects to BOINC project servers via XML-RPC. BOINC has been running distributed science since 2002 — climate modeling, protein folding, gravitational wave detection.
- SimulatedCoordinator: For CI and development. Generates synthetic work units to verify the full pipeline.
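To see how small a coordinator can be, here is a hypothetical simulated coordinator implementing the two-method interface above. The internals (seeded RNG, eight-float payloads) are illustrative assumptions, not the shipped implementation:

```python
import random
from dataclasses import dataclass

@dataclass
class WorkUnit:
    unit_id: int
    payload: list          # synthetic inputs to "compute" on

@dataclass
class WorkUnitResult:
    unit_id: int
    value: float

class SimulatedCoordinator:
    """Hands out synthetic work units and accepts any result,
    enough to exercise the full pipeline in CI (sketch)."""
    def __init__(self, seed: int = 0):
        self._rng = random.Random(seed)
        self._next_id = 0
        self.received = []

    def fetch_work_unit(self, hw_profile, projects):
        self._next_id += 1
        return WorkUnit(self._next_id, [self._rng.random() for _ in range(8)])

    def submit_result(self, result):
        self.received.append(result)
        return True
```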
The RCAN Protocol Addition
`contribute` is the first new scope added to RCAN since v1.6, now canonicalized in v1.8. Three new message types:
| Message | Direction | Purpose |
|---|---|---|
| `CONTRIBUTE_REQUEST` | coordinator → robot | Deliver a work unit |
| `CONTRIBUTE_RESULT` | robot → coordinator | Return results |
| `CONTRIBUTE_CANCEL` | robot → coordinator | Signal cancellation |
Consent Is Explicit
Owning a robot and chatting with it does not mean you’ve agreed to donate compute. `contribute` requires explicit opt-in, separate from any other RCAN consent. It defaults to `enabled: false`.
```yaml
agent:
  contribute:
    enabled: true
    projects: [climate, biodiversity]
    coordinator: boinc
    idle_after_minutes: 15
```
The Full Stack
What makes this different from “just run BOINC on a computer” is the integration depth across an open-source robotics ecosystem:
- Runtime (`castor/contribute/`): Coordinator abstraction, hardware profiling, cancellation-aware runner, fleet coordination
- RCAN Spec (v1.8): `contribute` scope, message types, telemetry schema, credit/reputation system, fleet coordination protocol
- Python SDK (`rcan-py` v0.6.0): `CONTRIBUTE_REQUEST`/`RESULT`/`CANCEL` message types, scope validation
- TypeScript SDK (`rcan-ts` v0.6.0): Same message types and scope support for web/Node.js integrations
- API: `GET /api/contribute`, `POST /api/contribute/start`, `POST /api/contribute/stop`, `GET /api/contribute/history`
- CLI: `castor contribute status/start/stop/history`
- Registry Telemetry: Live stats flow through Firestore to the fleet dashboard
- Docs: opencastor.com/docs/contribute
Everything ships together. Spec, runtime, SDKs, client, docs — all in sync, all open source.
What This Enables
Climate Modeling
climateprediction.net runs ensemble climate simulations — thousands of independent models, embarrassingly parallel, ~15 minutes each. Perfect for robots that idle for hours.
Biodiversity Monitoring
Robots deployed in agricultural or conservation settings already have cameras and microphones. During idle time, they can run species identification inference — BirdNET bioacoustics, plant classification, insect surveys. The sensor data is the science.
Protein Folding
Rosetta@home work units involve scoring protein conformations — pure matrix math that NPUs excel at. A Hailo-8L or Coral TPU handles these faster than a desktop CPU at a fraction of the power.
Humanitarian AI
Medical image pre-screening for underserved regions. TB detection from chest X-rays. Language model fine-tuning for low-resource languages. Edge inference keeps patient data local.
The Bigger Picture
We’re entering an era where millions of robots will be deployed — homes, warehouses, farms, hospitals, classrooms, public spaces. Every one will have compute hardware that sits idle most of the time.
SETI@Home proved in 1999 that idle compute could be donated for science. Folding@home proved during COVID that it could matter urgently. But both relied on volunteers installing software on personal computers.
Robots are different. They’re already networked. They already have standardized APIs (RCAN). They already report telemetry. They already have identity and authentication. The infrastructure for coordination exists. The consent model exists. The safety guarantees exist.
OpenCastor makes all of this concrete: an open-source runtime that any robot can run, a protocol that any robot can speak, and a registry where any robot can be found. castor contribute is just the natural consequence — if you have the infrastructure, the idle compute question answers itself.
What if contributing idle compute wasn’t an afterthought bolted onto robots, but a capability designed into the runtime from the start? What if “help with science when you’re bored” was as natural as “charge when your battery is low”?
That’s what we built. It’s opt-in, it’s safe, it’s preemptible, and it’s open source.
```yaml
agent:
  contribute:
    enabled: true
```
Your robot was going to idle anyway. Might as well cure something.
castor contribute is available in OpenCastor v2026.3.20+. The RCAN contribute scope is part of the v1.8 specification. Source: github.com/craigm26/OpenCastor. Protocol: rcan.dev. Client: app.opencastor.com.