This repository is the official implementation of OpenTrack, an open-source humanoid motion tracking codebase that uses MuJoCo for simulation and supports multi-GPU parallel training.
- [November 30, 2025] LAFAN1 generalist v1 released. You can now track cartwheels, kung fu, fall-and-get-up, and many other motions with a single policy.
- [September 19, 2025] Simple domain randomization released.
- [September 19, 2025] Tracking codebase released.
- Release motion tracking codebase
- Release simple domain randomization
- Release pretrained LAFAN1 generalist v1 checkpoints
- Release DAgger code
- Release AnyAdapter
- Release more pretrained checkpoints
- Release real-world deployment code
- Clone the repository:

  ```bash
  git clone git@github.com:GalaxyGeneralRobotics/OpenTrack.git
  ```

- Create a virtual environment and install dependencies:

  ```bash
  uv sync -i https://pypi.org/simple
  ```

- Create a `.env` file in the project directory with the following content:

  ```bash
  export GLI_PATH=<your_root_path>/OpenTrack
  export WANDB_PROJECT=<your_project_name>
  export WANDB_ENTITY=<your_entity_name>
  export WANDB_API_KEY=<your_wandb_api_key>
  ```

- Download the mocap data and put it under `data/mocap/`. Thanks to LocoMuJoCo for the retargeted motions of the LAFAN1 dataset! The file structure should look like:

  ```
  data/
  |-- xmls
  |   |-- ...
  |-- mocap
      |-- lafan1
          |-- UnitreeG1
              |-- dance1_subject1.npz
              |-- ...
  ```

- Initialize the MuJoCo environment:

  ```bash
  source .venv/bin/activate; source .env
  ```
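Each mocap clip above is an `.npz` archive. As a quick sanity check on the container format, you can round-trip a toy clip with NumPy. The key names `qpos` and `fps` below are illustrative assumptions, not necessarily the dataset's actual fields; inspect `np.load(path).files` on a real clip to see what it contains.

```python
import numpy as np

# Write a toy clip in the same container format. The keys are
# illustrative; check a real clip's fields with np.load(path).files.
qpos = np.zeros((120, 36), dtype=np.float32)  # 120 frames, 36-dim configuration
np.savez("toy_clip.npz", qpos=qpos, fps=np.array(30))

clip = np.load("toy_clip.npz")
print(sorted(clip.files))   # ['fps', 'qpos']
print(clip["qpos"].shape)   # (120, 36)
```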
- Download pretrained checkpoints and configs from checkpoints and configs, and put them under `experiments/`. Visualization results: videos.

- Run the evaluation script:

  ```bash
  # your_exp_name=<timestamp>_<exp_name>
  python play_policy.py --exp_name <your_exp_name> [--use_viewer] [--use_renderer] [--play_ref_motion]
  ```
As of November 30, 2025, we have open-sourced a generalist model on LAFAN1, distilled with DAgger from four teacher policies. This checkpoint was trained with simple domain randomization (DR). You can try deploying it on a Unitree G1 robot with your own deployment code, since we have not yet open-sourced our real-robot deployment pipeline.
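Schematically, DAgger distillation rolls out the *student* policy and relabels the visited states with *teacher* actions, so the student learns to recover on its own state distribution. A minimal NumPy sketch with a linear teacher and toy dynamics (everything here is illustrative; the released checkpoint distills real tracking policies):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((2, 2))   # toy linear teacher policy: a = s @ T
W = np.zeros((2, 2))              # student policy, refit each iteration
X, Y = [], []                     # aggregated (state, teacher label) dataset

for it in range(6):               # DAgger iterations
    s = rng.standard_normal(2)
    for _ in range(50):
        a = s @ W                 # the *student* picks the action...
        X.append(s.copy())
        Y.append(s @ T)           # ...but the teacher labels the visited state
        s = 0.9 * s + 0.1 * a + 0.01 * rng.standard_normal(2)  # toy dynamics
    # Refit the student on everything aggregated so far.
    W, *_ = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)

print(np.allclose(W, T))  # the linear student recovers the teacher
```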
- Train the model:

  ```bash
  # Train on a flat terrain:
  python train_policy.py --exp_name flat_terrain --terrain_type flat_terrain

  # Train on a rough terrain:
  python generate_terrain.py  # generate various hfields with Perlin noise
  python train_policy.py --exp_name rough_terrain --terrain_type rough_terrain

  # Debug mode (quick training test without logging):
  python train_policy.py --exp_name debug
  ```
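`generate_terrain.py` builds height fields from Perlin noise. The idea can be sketched with a cheap fractal *value* noise in NumPy — sum bilinearly upsampled random grids at doubling frequency. This is a stand-in for illustration, not the repository's actual generator:

```python
import numpy as np

def fractal_noise(shape, res=8, octaves=3, seed=0):
    """Cheap fractal value noise (a stand-in for Perlin noise):
    sum bilinearly upsampled random grids at doubling frequency."""
    rng = np.random.default_rng(seed)
    h = np.zeros(shape)
    amp = 1.0
    for o in range(octaves):
        n = res * 2**o + 1
        grid = rng.random((n, n))                      # coarse random lattice
        ys = np.linspace(0, n - 1, shape[0])
        xs = np.linspace(0, n - 1, shape[1])
        y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
        y1, x1 = np.minimum(y0 + 1, n - 1), np.minimum(x0 + 1, n - 1)
        wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
        top = grid[y0][:, x0] * (1 - wx) + grid[y0][:, x1] * wx
        bot = grid[y1][:, x0] * (1 - wx) + grid[y1][:, x1] * wx
        h += amp * ((1 - wy) * top + wy * bot)         # bilinear blend
        amp *= 0.5                                     # higher octaves fade
    return (h - h.min()) / (h.max() - h.min())         # normalize to [0, 1]

hfield = fractal_noise((64, 64))
print(hfield.shape, float(hfield.min()), float(hfield.max()))
```

A MuJoCo `hfield` asset expects elevation data normalized to [0, 1], which is why the sketch rescales at the end.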
- Evaluate the model. First, convert the Brax model checkpoint to PyTorch:

  ```bash
  # your_exp_name=<timestamp>_<exp_name>
  python brax2torch.py --exp_name <your_exp_name>
  ```
  Next, run the evaluation script:

  ```bash
  # your_exp_name=<timestamp>_<exp_name>
  python play_policy.py --exp_name <your_exp_name> [--use_viewer] [--use_renderer] [--play_ref_motion]
  ```
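Conceptually, converting a Brax checkpoint to PyTorch means renaming a nested Flax-style parameter tree into a flat `state_dict` and transposing each kernel, since Flax `Dense` stores weights as `(in, out)` while `torch.nn.Linear` stores `(out, in)`. A toy NumPy sketch of that mapping (the layer names and nesting are assumptions for illustration; `brax2torch.py` handles the real layout):

```python
import numpy as np

# Toy Flax-style parameter tree; the real checkpoint's layer names
# and nesting may differ -- brax2torch.py handles the actual layout.
flax_params = {
    "hidden_0": {"kernel": np.ones((8, 32)), "bias": np.zeros(32)},
    "hidden_1": {"kernel": np.ones((32, 4)), "bias": np.zeros(4)},
}

def to_torch_state_dict(params):
    """Flatten {layer: {kernel, bias}} into Sequential-style keys,
    transposing kernels from (in, out) to (out, in)."""
    state = {}
    for i, name in enumerate(sorted(params)):
        state[f"net.{i}.weight"] = params[name]["kernel"].T.copy()
        state[f"net.{i}.bias"] = params[name]["bias"].copy()
    return state

sd = to_torch_state_dict(flax_params)
print(sd["net.0.weight"].shape)  # (32, 8): transposed for torch.nn.Linear
```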
This repository is built upon jax, brax, loco-mujoco, and mujoco_playground.
If you find this repository helpful, please cite our work:
```bibtex
@article{zhang2025track,
  title={Track any motions under any disturbances},
  author={Zhang, Zhikai and Guo, Jun and Chen, Chao and Wang, Jilong and Lin, Chenghuai and Lian, Yunrui and Xue, Han and Wang, Zhenrong and Liu, Maoqi and Lyu, Jiangran and others},
  journal={arXiv preprint arXiv:2509.13833},
  year={2025}
}
```