# Shadow Hand Teleop Demo
Teleoperation demo for the Shadow Hand E3M5 (right hand) using MuJoCo for physics simulation and FastAPI to accept inbound control sequences over HTTP. A separate mocap program drives the hand from a webcam, video file, or a RealSense D4xx depth camera.
## Architecture
Two cooperating programs that share only the HTTP API and the wire contract under `src/common/`:

```
┌────────────────┐   HTTP   ┌────────────────┐   step   ┌────────────────┐
│ Mocap          │ ───────► │ FastAPI        │ ───────► │ MuJoCo Sim     │
│  source →      │   /ctrl  │  :8000         │          │  + Viewer      │
│  estimator →   │ ◄─────── │  routes        │ ◄─────── │  + telemetry   │
│  retargeter    │ /actuat. └──────┬─────────┘  state   └──────┬─────────┘
└────────────────┘                 │ target (20,)              │
                                   ▼                           ▼
                              ┌──────────────────────────────────────┐
                              │  Controller (pd / gravity_pd / mpc)  │
                              │  pure numpy: compute(target, state,  │
                              │  velocity, dt) → torques (20,)       │
                              └──────────────────────────────────────┘
```
The teleop process owns the simulator, the viewer, and the API. The mocap process is independent and only talks to the API; you can also drive /ctrl from any HTTP client.
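For instance, a minimal Python client can drive the hand directly (a sketch using only the standard library; `ctrl_payload` and `post_ctrl` are illustrative helper names, not part of the repo):

```python
import json
import urllib.request

API = "http://127.0.0.1:8000"  # teleop server started by `make run`

def ctrl_payload(targets) -> bytes:
    """Build the /ctrl request body; the hand expects exactly 20 targets (rad)."""
    targets = [float(t) for t in targets]
    if len(targets) != 20:
        raise ValueError(f"expected 20 actuator targets, got {len(targets)}")
    return json.dumps({"ctrl": targets}).encode()

def post_ctrl(targets) -> dict:
    """POST a full set of actuator targets to the running teleop server."""
    req = urllib.request.Request(
        f"{API}/ctrl",
        data=ctrl_payload(targets),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# post_ctrl([0.0] * 20)  # hold all joints at zero (requires a running server)
```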
## Quick start
```sh
make setup        # create venv + pull the MuJoCo Shadow Hand submodule (idempotent)
make init-config  # copy *.example.yaml -> *.yaml (one-time, won't overwrite)
make run          # open viewer + API server on :8000
```
To drive the hand from a hand-tracking source:
```sh
make setup-mocap-cpu  # one-time: install MediaPipe + analytic retargeter
make run-mocap        # in a second terminal, while `make run` is up
```
For the offline gain-sweep experiment (no live API needed):
```sh
make setup-experiment
make experiment       # prepare → run → plot, writes results/<tag>/
```
See SETUP.md for details on optional GPU and RealSense extras.
## Configuration
Three Pydantic-validated YAML files cover all runtime behaviour:
| File | Used by | What you tune |
|---|---|---|
| `teleop.yaml` | `make run` | server host/port, controller type & gains, render settings, sensors, telemetry windows |
| `mocap.yaml` | `make run-mocap` | source (webcam / video / RealSense), estimator, retargeter, EMA filter, visualizer, calibration |
| `experiment.yaml` | `make experiment` | input video, controller grid, frozen actuators, output tag |
Each loader looks for `./<name>.yaml` first and falls back to the tracked `./<name>.example.yaml` template with a stderr hint, so the demo runs out of the box.
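That fallback can be sketched as follows (`resolve_config` is an illustrative helper, not the repo's actual loader code):

```python
import sys
from pathlib import Path

def resolve_config(name: str) -> Path:
    """Prefer ./<name>.yaml; fall back to the tracked ./<name>.example.yaml."""
    local = Path(f"{name}.yaml")
    if local.exists():
        return local
    example = Path(f"{name}.example.yaml")
    # Hint on stderr so the demo still runs out of the box
    print(f"[config] {local} not found, using {example}", file=sys.stderr)
    return example
```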
A minimal `teleop.yaml`:

```yaml
server:
  host: "127.0.0.1"
  port: 8000

sim:
  headless: false

controller:
  type: gravity_pd        # pd | gravity_pd | mpc
  kp: 1.0                 # scalar or per-actuator list of length 20
  kd: 0.1
  torque_limit_scale: 1.0

render:                   # offscreen JPEG stream over /stream WebSocket
  enabled: false
  width: 640
  height: 480
  fps: 30

sensors:
  enabled: false          # fingertip touch grids (8×8 taxels per pad)

telemetry:
  enabled: true           # per-actuator ring buffers consumed by /control/telemetry
```
## API endpoints
| Method | Path | Description |
|---|---|---|
| GET | `/health` | Liveness probe; reports sim and stream readiness |
| GET | `/state` | Current joint positions, velocities, applied torques |
| GET | `/actuators` | Per-actuator name, index, target range, torque limit |
| GET | `/config` | Echo of the loaded `teleop.yaml` |
| POST | `/ctrl` | Set all 20 actuator targets (rad) |
| POST | `/sequence` | Enqueue a timed sequence of control commands |
| POST | `/clear` | Drop all queued commands |
| GET | `/sensors/info` | Static metadata for each fingertip touch grid |
| GET | `/sensors` | Current masked taxel readings for all fingertips |
| POST | `/sensors/mask` | Toggle one cell, one fingertip, or every fingertip |
| GET | `/control/telemetry` | Per-actuator target / state / error / τ stats over 1 s / 10 s / 1 min windows |
| GET | `/control/telemetry/raw` | Raw ring-buffer slice for one actuator (for live plotting) |
| GET | `/control/gravity` | Latest gravity + passive torque vector (N·m, actuator space) |
| WS | `/stream` | JPEG frame stream (only when `render.enabled=true`) |
Interactive OpenAPI docs at http://127.0.0.1:8000/docs once the server is running.
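The windowed statistics behind `/control/telemetry` can be approximated with a time-stamped ring buffer per actuator; a sketch of the idea, not the server's actual implementation:

```python
from collections import deque

class TelemetryWindow:
    """Fixed-capacity ring buffer of (timestamp, value) samples for one signal."""

    def __init__(self, maxlen: int = 10_000):
        self.samples = deque(maxlen=maxlen)  # old samples fall off the left

    def append(self, t: float, value: float) -> None:
        self.samples.append((t, value))

    def stats(self, now: float, window: float) -> dict:
        """Summary of samples newer than `now - window` seconds (e.g. 1 s / 10 s / 60 s)."""
        recent = [v for (t, v) in self.samples if t >= now - window]
        if not recent:
            return {"n": 0}
        return {"n": len(recent), "mean": sum(recent) / len(recent),
                "min": min(recent), "max": max(recent)}
```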
### Example: curl
```sh
# Liveness + capability probe
curl http://127.0.0.1:8000/health

# Read full hand state
curl http://127.0.0.1:8000/state

# Hold all actuators at zero
curl -X POST http://127.0.0.1:8000/ctrl \
  -H 'Content-Type: application/json' \
  -d '{"ctrl": [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]}'

# Enqueue a two-step sequence (close fist, then open)
curl -X POST http://127.0.0.1:8000/sequence \
  -H 'Content-Type: application/json' \
  -d '{
    "commands": [
      {"ctrl": [0,0,0,1.0,0,0,1.2,0,1.2,2.5,0,1.2,2.5,0,1.2,2.5,0,0,1.2,2.5], "duration": 2.0},
      {"ctrl": [0,0,0,0,  0,0,0,  0,0,  0,  0,0,  0,  0,0,  0,  0,0,0,  0  ], "duration": 2.0}
    ]
  }'

# Read 1 s / 10 s / 1 min control telemetry
curl http://127.0.0.1:8000/control/telemetry

# Watch one actuator over the last second (raw samples for plotting)
curl 'http://127.0.0.1:8000/control/telemetry/raw?actuator=rh_A_FFJ0&window=1s'
```
## Controllers

Three controllers ship in `src/teleop/controllers/`, all selected by `controller.type` in `teleop.yaml`:

| `type` | What it does | Tuning knobs |
|---|---|---|
| `pd` | Pure proportional-derivative on the actuator-space target | `kp`, `kd` (scalar or per-actuator), `torque_limit_scale` |
| `gravity_pd` | PD plus a per-step gravity-compensation feed-forward | same as `pd` |
| `mpc` | Infinite-horizon LQR (per-DOF, solved via DARE) | `mpc_q_pos`, `mpc_q_vel`, `mpc_r_effort`, `torque_limit_scale` |
### Adding a new controller
- Create `src/teleop/controllers/my_controller.py`, subclass `BaseController`, and implement `compute(target, state, velocity, dt) -> torques`. No MuJoCo imports.
- Register the class in `REGISTRY` in `src/teleop/controllers/__init__.py`.
- Add the new name to the `Literal[...]` in `ControllerConfig.type` (`src/teleop/config.py`) and set `controller.type` accordingly in `teleop.yaml`.
```python
import numpy as np

from teleop.controllers.base import BaseController


class MyController(BaseController):
    def compute(self, target: np.ndarray, state: np.ndarray,
                velocity: np.ndarray, dt: float) -> np.ndarray:
        # your algorithm here, return shape (20,) torques in N·m
        ...
```
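As a concrete sketch, a filled-in proportional-plus-damping `compute` (shown self-contained with a stub base class so it runs standalone; in the repo you would subclass the real `BaseController` instead):

```python
import numpy as np

class BaseController:
    """Stub for illustration; the real base lives in src/teleop/controllers/base.py."""

class DampedPController(BaseController):
    """Hypothetical example: proportional pull toward the target plus velocity damping."""

    def __init__(self, kp: float = 1.0, kd: float = 0.1):
        self.kp, self.kd = kp, kd

    def compute(self, target: np.ndarray, state: np.ndarray,
                velocity: np.ndarray, dt: float) -> np.ndarray:
        # Pure numpy, no MuJoCo imports; returns shape (20,) torques
        return self.kp * (target - state) - self.kd * velocity
```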
## Actuators (20)
| # | Name | Joint / Tendon | Target range (rad) |
|---|---|---|---|
| 0 | rh_A_WRJ2 | rh_WRJ2 | [-0.524, 0.175] |
| 1 | rh_A_WRJ1 | rh_WRJ1 | [-0.698, 0.489] |
| 2 | rh_A_THJ5 | rh_THJ5 | [-1.047, 1.047] |
| 3 | rh_A_THJ4 | rh_THJ4 | [ 0.000, 1.222] |
| 4 | rh_A_THJ3 | rh_THJ3 | [-0.209, 0.209] |
| 5 | rh_A_THJ2 | rh_THJ2 | [-0.698, 0.698] |
| 6 | rh_A_THJ1 | rh_THJ1 | [-0.262, 1.571] |
| 7 | rh_A_FFJ4 | rh_FFJ4 | [-0.349, 0.349] |
| 8 | rh_A_FFJ3 | rh_FFJ3 | [-0.262, 1.571] |
| 9 | rh_A_FFJ0 | tendon (FFJ1+FFJ2) | [ 0.000, 3.142] |
| 10 | rh_A_MFJ4 | rh_MFJ4 | [-0.349, 0.349] |
| 11 | rh_A_MFJ3 | rh_MFJ3 | [-0.262, 1.571] |
| 12 | rh_A_MFJ0 | tendon (MFJ1+MFJ2) | [ 0.000, 3.142] |
| 13 | rh_A_RFJ4 | rh_RFJ4 | [-0.349, 0.349] |
| 14 | rh_A_RFJ3 | rh_RFJ3 | [-0.262, 1.571] |
| 15 | rh_A_RFJ0 | tendon (RFJ1+RFJ2) | [ 0.000, 3.142] |
| 16 | rh_A_LFJ5 | rh_LFJ5 | [ 0.000, 0.785] |
| 17 | rh_A_LFJ4 | rh_LFJ4 | [-0.349, 0.349] |
| 18 | rh_A_LFJ3 | rh_LFJ3 | [-0.262, 1.571] |
| 19 | rh_A_LFJ0 | tendon (LFJ1+LFJ2) | [ 0.000, 3.142] |
Indices 9 / 12 / 15 / 19 are tendon actuators that couple two finger joints each (50/50 distribution by tendon Jacobian); the rest drive a single joint.
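Under that even-split assumption, recovering per-joint angles from a tendon actuator target is just halving; a tiny illustrative helper (names are mine, not the repo's):

```python
# Tendon actuator index -> the pair of coupled finger joints it drives
TENDON_ACTUATORS = {9: ("FFJ1", "FFJ2"), 12: ("MFJ1", "MFJ2"),
                    15: ("RFJ1", "RFJ2"), 19: ("LFJ1", "LFJ2")}

def split_tendon_target(ctrl_value: float) -> tuple[float, float]:
    """Split a tendon target evenly across its two coupled joints (50/50)."""
    half = ctrl_value / 2.0
    return (half, half)

# e.g. rh_A_FFJ0 at full flexion (3.142 rad) puts FFJ1 and FFJ2 near 1.571 rad each
```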
## Repository layout

```
src/
  common/      shared schemas & actuator names (the wire contract)
  teleop/      sim adapter, controllers, FastAPI app, telemetry, sensors
  mocap/       sources, estimators, retargeters, visualizer, HTTP client
  experiment/  offline harness: prepare → run → plot
scripts/       run_demo.py, run_mocap.py, run_experiment.py, etc.
tests/         offline unit tests (no MuJoCo / no GPU required)
results/       experiment outputs (gitignored)
imgs/          headline figures used by experiment.tex
```
## License
Apache-2.0 (Shadow Hand model), MIT (this demo code).
