A modular, extensible framework for research and development in robotic manipulation, supporting both model-based and learning-based approaches. The initial focus is the SO-ARM100 manipulator, but the design is intended to generalize to other fixed-base robots.

Goals:
- Provide a unified stack for manipulation research (hardware, simulation, control, planning, learning)
- Support both model-based and learning-based pipelines
- Enable sim-to-real transfer and benchmarking
- Modular design for easy extension to new robots and tasks
```
fullstack-manip/
├── hardware/          # Robot models, CAD, system ID
├── state_estimation/  # Multi-sensor fusion, calibration
├── simulation/        # MuJoCo and other simulators
├── control/           # Low- and high-level controllers
├── planning/          # Motion planning, trajectory gen
├── perception/        # Visual servoing, vision modules
├── learning/          # RL, VLA, datasets
├── evaluation/        # Benchmarking, metrics
├── scripts/           # Utilities, launchers
├── tests/             # Unit/integration tests
└── docs/              # Documentation, diagrams
```
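The `hardware/` directory holds robot descriptions such as the SO-ARM100 URDF. As a minimal sketch of what consuming such a description looks like, the snippet below extracts joint names from a URDF string using only the standard library; the URDF contents here are a hypothetical fragment, not the actual SO-ARM100 model.

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal URDF snippet; the real model lives under hardware/.
# Only the structure matters for this illustration.
URDF = """
<robot name="so_arm100">
  <link name="base_link"/>
  <link name="link_1"/>
  <joint name="joint_1" type="revolute">
    <parent link="base_link"/>
    <child link="link_1"/>
  </joint>
</robot>
"""

def joint_names(urdf_xml: str) -> list[str]:
    """Return the names of all joints declared in a URDF string."""
    root = ET.fromstring(urdf_xml)
    return [j.attrib["name"] for j in root.iter("joint")]

print(joint_names(URDF))  # ['joint_1']
```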
Features:
- SO-ARM100 support (URDF, sysID, comms)
- Multi-sensor state estimation (camera, mocap, IMU)
- MuJoCo-based simulation with sim2real tools
- Model-based stack: planning, visual servoing, MPC
- Learning-based stack: RL, VLA (Open PI, LeRobot)
- Modular, extensible, research-friendly
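The modular design boils down to components meeting shared interfaces, so a model-based controller and a learned policy can be swapped behind the same call site. The sketch below illustrates that pattern; the names (`Controller`, `JointPD`, `run_step`) are hypothetical, not the framework's actual API.

```python
from typing import Protocol

class Controller(Protocol):
    """Interface every low-level controller is expected to satisfy."""
    def compute(self, q: list[float], q_des: list[float]) -> list[float]: ...

class JointPD:
    """Toy per-joint proportional controller (no dynamics, for illustration)."""
    def __init__(self, kp: float) -> None:
        self.kp = kp

    def compute(self, q, q_des):
        return [self.kp * (d - a) for a, d in zip(q, q_des)]

def run_step(controller: Controller, q, q_des):
    # Any object matching the Controller protocol can be dropped in here,
    # which is what lets model-based and learned policies share one stack.
    return controller.compute(q, q_des)

torques = run_step(JointPD(kp=2.0), q=[0.0, 0.5], q_des=[1.0, 0.5])
print(torques)  # [2.0, 0.0]
```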
Getting started:
1. Clone the repository
2. Install dependencies: `pip install -r requirements.txt`, or use `environment.yml`
3. Explore the example scripts in `scripts/`
4. See `docs/` for architecture and usage guides
Module overview:
- `hardware/` — URDF, CAD, sysID data
- `state_estimation/` — Sensor fusion, calibration
- `simulation/` — MuJoCo envs, assets, sim2real
- `control/` — Low/high-level controllers
- `planning/` — MoveIt, planners
- `perception/` — Vision, visual servoing
- `learning/` — RL, VLA, datasets
- `evaluation/` — Metrics, benchmarking
- `scripts/` — Launchers, utilities
- `tests/` — Testing
- `docs/` — Documentation
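The `state_estimation/` module fuses readings from camera, mocap, and IMU. As a sketch of the underlying idea (not the framework's actual estimator API), the snippet below performs a standard variance-weighted fusion of two scalar position measurements, so the more certain sensor dominates the estimate; the numbers are made up for illustration.

```python
def fuse(z1: float, var1: float, z2: float, var2: float) -> tuple[float, float]:
    """Variance-weighted fusion of two independent scalar measurements.

    This is the static Kalman/least-squares combination: the sensor with
    the smaller variance receives the larger weight, and the fused
    variance is always below that of either input.
    """
    w1 = var2 / (var1 + var2)
    w2 = var1 / (var1 + var2)
    fused = w1 * z1 + w2 * z2
    fused_var = (var1 * var2) / (var1 + var2)
    return fused, fused_var

# Hypothetical numbers: a noisy camera fix and a precise mocap fix (meters).
est, var = fuse(z1=0.52, var1=0.04, z2=0.50, var2=0.0004)
print(round(est, 4), round(var, 6))
```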
Contributions are welcome! Please open issues or pull requests for new features, bug fixes, or documentation improvements.
MIT License (see LICENSE file)