Why I Added Quantum Cryptography to My Robot
AI agents make claims.
“I see a 40cm obstacle at 2.1 meters.” “The path is clear.” “Confidence: 91%.”
Those claims drive actions. Move forward. Turn left. Stop. On a robot with a Hailo-8 and an OAK-D camera, those actions have physical consequences. And when something goes wrong — and it will — the first question is always: what did the agent actually say, and when?
Log files answer that if they haven’t been corrupted, overwritten, or tampered with. But a plain log file has no integrity guarantee. You can’t tell whether a line was appended at runtime or edited after the fact. You can’t prove the sequence was never reordered. And you especially can’t prove that the agent’s stated confidence at timestep 482 was 91% and not 67%.
That’s the problem I built quantum-link-sim’s CommitmentEngine to solve.
The threat model
Let me be clear about what I’m not trying to solve.
I’m not building a production cryptographic protocol for autonomous weapons. I’m not trying to satisfy NIST or get a government contract. This is a Raspberry Pi 5 running a hobbyist robot with a 16GB RAM ceiling and a Hailo-8 NPU.
The threat model is simpler, and also more interesting: can an AI agent make claims that are cryptographically attributable to a specific moment in time, under specific keys, in an append-only chain that can be verified offline?
If the answer is yes, then post-hoc auditing becomes possible. Not just “the logs say X” — but “the chain says X, and here’s the HMAC proof that it was written at position 37 and never moved.”
That’s what I built.
Three key modes
The CommitmentEngine has three modes for deriving encryption keys: classical, quantum, and hybrid (the default).
Classical is what most security systems use. You call os.urandom(32), pipe it through HKDF-SHA256, and you have a 256-bit AES key. Fast — under 0.05ms. Strong against every classical adversary. Vulnerable to a sufficiently capable quantum computer running Grover’s algorithm, which halves the effective key length to ~128 bits. For a Raspberry Pi in 2026, this is overkill. But I wanted to understand the comparison.
Quantum uses BB84 — the first quantum key distribution protocol, published by Bennett and Brassard in 1984. The idea is that you encode bits as quantum states (polarized photons in the original; simulated qubits here), and any attempt to eavesdrop collapses the superposition and introduces measurable errors. If the Quantum Bit Error Rate (QBER) stays below 11%, the key is considered secure under information-theoretic assumptions — meaning the security doesn’t rest on computational hardness, it rests on physics.
In practice, on a Pi without actual quantum hardware, the “quantum” path is a high-fidelity numpy simulation. You can optionally swap in Qiskit’s StatevectorSampler for real circuit simulation, but that costs 50–200ms per key and isn’t worth it on constrained hardware. The numpy BB84 backend takes about 8–15ms per key — which is why there’s a pool.
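The sifting step at the heart of BB84 can be sketched in a few lines of plain Python. This is a toy illustration of the protocol's logic, not the library's numpy backend: Alice picks random bits and random bases, Bob measures in his own random bases, and only the positions where the bases happen to match survive into the key.

```python
import secrets

def bb84_sift(n_bits: int = 512) -> list[int]:
    """Toy BB84 sifting: keep only the bits where Alice's and Bob's
    randomly chosen bases agree. Roughly half survive on average."""
    alice_bits  = [secrets.randbelow(2) for _ in range(n_bits)]
    alice_bases = [secrets.randbelow(2) for _ in range(n_bits)]
    bob_bases   = [secrets.randbelow(2) for _ in range(n_bits)]
    # With no eavesdropper and matching bases, Bob reads Alice's bit
    # exactly; mismatched bases give a random result and are discarded.
    return [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases)
            if a == b]

key_bits = bb84_sift(512)
print(len(key_bits))  # typically about half of n_bits survive sifting
```

This is also why the config requests 512 raw qubits for a 256-bit key: sifting discards roughly half the transmission before error estimation even begins.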
Hybrid is the default, and the interesting one. The key is XOR(HKDF-SHA256 key, BB84 key). Both keys are independently derived. An adversary has to break both channels simultaneously to recover plaintext. You get classical security guarantees plus QKD entropy. The overhead versus pure quantum is negligible when the pool is warm.
How it actually works
Every commit() call does five things:
1. Serialize and hash the payload.
The payload can be anything — a dict, a string, raw bytes. It gets serialized deterministically (JSON with sorted keys for dicts) and hashed with SHA3-256. That’s the payload_hash. SHA3-256 provides ~128-bit post-quantum security — both SHA3-256 and SHA-256 lose roughly half their effective security bits against Grover’s algorithm. The advantage of SHA3-256 is its Keccak sponge construction (unlike SHA-256’s Merkle-Damgård design), which NIST recommends for post-quantum contexts.
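Step 1 can be sketched in stdlib Python. This is my reading of the described behavior (deterministic serialization, then SHA3-256), not the library's exact code:

```python
import hashlib
import json

def payload_hash(payload) -> str:
    """Deterministically serialize a payload and hash it with SHA3-256."""
    if isinstance(payload, bytes):
        data = payload
    elif isinstance(payload, str):
        data = payload.encode("utf-8")
    else:
        # Sorted keys and compact separators make dict hashing
        # independent of insertion order.
        data = json.dumps(payload, sort_keys=True,
                          separators=(",", ":")).encode("utf-8")
    return hashlib.sha3_256(data).hexdigest()

# Key order does not matter once serialization is canonical:
assert payload_hash({"a": 1, "b": 2}) == payload_hash({"b": 2, "a": 1})
```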
2. Derive a key.
In hybrid mode: generate a classical key via HKDF-SHA256, draw a quantum key from the pool, XOR them. The pool runs in a background thread and keeps 32 pre-generated BB84 keys warm. A pool hit costs under 0.15ms. A pool miss falls back to live BB84 (~10–15ms) or classical HKDF if BB84 fails entirely.
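The hybrid combination itself is a one-liner once both keys exist. Here is a stdlib sketch with a minimal RFC 5869 HKDF-SHA256 and os.urandom standing in for the pooled BB84 key (the pool and fallback logic are omitted):

```python
import hashlib
import hmac
import os

def hkdf_sha256(ikm: bytes, length: int = 32,
                salt: bytes = b"", info: bytes = b"") -> bytes:
    """Minimal HKDF-SHA256 (RFC 5869): extract, then expand."""
    prk = hmac.new(salt or b"\x00" * 32, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

classical_key = hkdf_sha256(os.urandom(32))
quantum_key = os.urandom(32)  # stand-in for a pooled BB84 key
# Breaking the hybrid key requires recovering BOTH inputs.
hybrid_key = bytes(c ^ q for c, q in zip(classical_key, quantum_key))
assert len(hybrid_key) == 32
```

The XOR is what makes the channels independent: knowing one input reveals nothing about the output unless you also know the other.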
3. Encrypt the payload with AES-256-GCM.
The nonce is 96 bits from os.urandom. The additional authenticated data (AAD) is the payload hash — this cryptographically binds the ciphertext to the hash. You can’t swap a ciphertext in without breaking GCM authentication.
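The AAD binding can be demonstrated with the `cryptography` package's AESGCM primitive (an illustration assuming that library; the payload and hash values here are placeholders):

```python
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)
nonce = os.urandom(12)            # 96-bit nonce, as in the engine
payload_hash = b"9e3e4c5a..."     # placeholder for the SHA3-256 digest
plaintext = b'{"action": "move_forward"}'

ciphertext = aead.encrypt(nonce, plaintext, payload_hash)

# Decrypting under the correct AAD succeeds...
assert aead.decrypt(nonce, ciphertext, payload_hash) == plaintext

# ...but any other AAD (say, a swapped payload hash) fails authentication.
try:
    aead.decrypt(nonce, ciphertext, b"forged-hash")
except InvalidTag:
    print("tampering detected")
```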
4. Extend the HMAC chain.
This is the part that makes the log auditable. Every record has a chain_hash field computed as:
chain_hash = HMAC-SHA256(chain_secret, prev_chain_hash || payload_hash)
The chain_secret is a 32-byte secret generated at engine startup, held in memory, never written to the log. This means that even with full access to the JSONL log file, an attacker cannot forge a valid chain — they’d need the secret to produce correct HMAC values.
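The chain extension step is a few lines of stdlib Python. A sketch, assuming an all-zero genesis value for the first record:

```python
import hashlib
import hmac
import os

chain_secret = os.urandom(32)  # held in memory only, never logged

def extend_chain(prev_chain_hash: bytes, payload_hash: bytes) -> bytes:
    # chain_hash = HMAC-SHA256(chain_secret, prev_chain_hash || payload_hash)
    return hmac.new(chain_secret, prev_chain_hash + payload_hash,
                    hashlib.sha256).digest()

genesis = bytes(32)  # assumed genesis value for this sketch
h1 = extend_chain(genesis, hashlib.sha3_256(b"record 1").digest())
h2 = extend_chain(h1, hashlib.sha3_256(b"record 2").digest())
# Editing record 1, or swapping the two records, changes h1 and
# invalidates every later hash; forging new values requires chain_secret.
```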
5. Append to the chain and optionally persist to JSONL.
The in-memory chain is append-only and thread-safe. If a storage_path is set, each record is written as a single compact JSON line. The file is portable — you can ship it to another machine that holds the chain_secret and run verify_chain() offline.
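An offline verifier only needs the payload_hash and chain_hash fields plus the secret. A minimal sketch (not the library's implementation; it assumes an all-zero genesis hash):

```python
import hashlib
import hmac

def verify_chain(records, chain_secret: bytes):
    """Recompute every chain_hash from scratch.
    Returns (True, None), or (False, index_of_first_bad_record)."""
    prev = bytes(32)  # assumed genesis value
    for i, rec in enumerate(records):
        expected = hmac.new(chain_secret,
                            prev + bytes.fromhex(rec["payload_hash"]),
                            hashlib.sha256).hexdigest()
        if expected != rec["chain_hash"]:
            return False, i
        prev = bytes.fromhex(rec["chain_hash"])
    return True, None

# Build a two-record chain, then tamper with it.
secret = b"\x01" * 32
records, prev = [], bytes(32)
for payload in (b"step 1", b"step 2"):
    ph = hashlib.sha3_256(payload).hexdigest()
    ch = hmac.new(secret, prev + bytes.fromhex(ph),
                  hashlib.sha256).hexdigest()
    records.append({"payload_hash": ph, "chain_hash": ch})
    prev = bytes.fromhex(ch)

assert verify_chain(records, secret) == (True, None)
records[0]["payload_hash"] = hashlib.sha3_256(b"forged").hexdigest()
assert verify_chain(records, secret) == (False, 0)
```

Note that verification never touches the ciphertext: integrity of the sequence and integrity of the payloads are checked through the hashes alone.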
The QBER backstop
BB84 keys include a QBER measurement — the fraction of sifted bits that disagreed between Alice and Bob. In a real optical system, QBER > 11% means an eavesdropper is present. In simulation, it reflects whether the configured eve_probability triggered an interception.
Keys with QBER above the threshold are rejected. The pool retries up to 3 times before falling back to classical key derivation. The CommitmentRecord always reports the measured QBER and a key_secure flag, so the audit log shows whether a commitment was made under a degraded key.
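The check itself is simple enough to state in full. A sketch of the QBER computation and the accept/reject decision, using the 11% threshold from the config:

```python
QBER_THRESHOLD = 0.11  # standard BB84 abort threshold

def qber(alice_sifted: list[int], bob_sifted: list[int]) -> float:
    """Fraction of sifted bits on which Alice and Bob disagree."""
    mismatches = sum(a != b for a, b in zip(alice_sifted, bob_sifted))
    return mismatches / len(alice_sifted)

def key_is_secure(alice_sifted: list[int], bob_sifted: list[int]) -> bool:
    # Above the threshold, assume an eavesdropper and reject the key.
    return qber(alice_sifted, bob_sifted) <= QBER_THRESHOLD

assert key_is_secure([0, 1, 1, 0], [0, 1, 1, 0])      # clean channel
assert not key_is_secure([0, 1, 1, 0], [1, 0, 1, 0])  # 50% QBER: reject
```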
What a record looks like
{
  "id": "a3f7c1d9-82b4-4e1c-9a0f-d3c4e5f61b2a",
  "timestamp_ns": 1740721482934812416,
  "payload_hash": "9e3e4c5a1d2b...",
  "ciphertext_hex": "4a2f8b...",
  "nonce_hex": "c3d4e5f6...",
  "key_mode": "hybrid",
  "qber": 0.034821,
  "key_secure": true,
  "chain_hash": "7f8a9b0c1d2e...",
  "proof": {
    "mode": "hybrid",
    "components": ["HKDF-SHA256", "pool"],
    "combination": "XOR",
    "qber": 0.034821,
    "quantum_secure": true,
    "note": "Key = XOR(HKDF-SHA256, BB84). Adversary must break both classical KDF and QKD channel."
  }
}
No raw key material anywhere. The proof field carries enough metadata to know what key derivation path was taken. The chain_hash ties this record to every preceding one. The payload_hash links the ciphertext to the plaintext without storing the plaintext.
Integration with OpenCastor
OpenCastor v2026.2.27.2 ships with castor/quantum_commitment.py, which builds a CommitmentEngine from the RCAN config block and exposes audit.py’s verify_quantum_chain() function.
The config:
security:
  commitment:
    enabled: true
    mode: hybrid
    pool_size: 32
    n_qkd_bits: 512
    qber_threshold: 0.11
    use_qiskit: false
    storage_path: .opencastor-commitments.jsonl
    export_secret_path: .opencastor-chain-secret.hex
When enabled, every agent decision — every perception result, every planner output, every action the robot takes — gets committed to the chain before execution. The chain secret is exported to a separate file. If you want to verify an audit log, you need both.
The actual wiring into castor/main.py startup is the next step. It’s implemented but not yet called on boot.
Why quantum at all?
Fair question. This is a hobby robot. There are no quantum computers attacking my log files.
The honest answer is that I’m building quantum-link-sim as infrastructure for a specific long-term bet: AI-generated claims will eventually need cryptographic accountability, and the most defensible version of that uses key derivation that doesn’t rest solely on computational hardness assumptions.
We’re at an inflection point where AI agents are making consequential decisions in physical systems — robots, vehicles, medical devices. “The model said X” is not sufficient audit evidence if the log can be silently edited. HMAC chains with in-memory secrets aren’t either, if the machine itself is compromised.
QKD doesn’t solve the compromised-machine problem. Nothing does. But it adds a genuinely orthogonal security primitive. And building it now — when the stakes are a Raspberry Pi robot bumping into a wall — means the architecture is ready when the stakes are higher.
The code
- quantum-link-sim v0.2.1: github.com/craigm26/Quantum-link-Sim
- OpenCastor v2026.2.27.2: opencastor.com
- CI: Python 3.10/3.11/3.12, ruff, pytest, coverage ≥50% — all green.
pip install quantum-link-sim
# With Qiskit backend:
pip install "quantum-link-sim[quantum]"
The commitment API:
from quantumlink_sim.commitment import CommitmentEngine, KeyMode
engine = CommitmentEngine(mode=KeyMode.HYBRID)
engine.start()
record = engine.commit({"action": "move_forward", "linear_x": 0.3, "confidence": 0.91})
print(record.id, record.qber, record.key_secure)
ok, broken_at = engine.verify_chain()
That’s it. Thread-safe. Async-safe (commit_async() available). Portable JSONL output. Offline-verifiable with the exported chain secret.
The physics is simulated. The cryptography is real.