Project Page | Paper | Video
Authors: Junyao Shi*, Zhuolun Zhao*, Tianyou Wang, Ian Pedroza†, Amy Luo†, Jie Wang, Jason Ma, Dinesh Jayaraman
University of Pennsylvania
ICRA 2025
Corresponding to: Junyao Shi ([email protected])
This is the official demo code for human wrist action prediction in ZeroMimic. ZeroMimic is a system that distills robotic manipulation skills from egocentric human web videos for diverse zero-shot deployment.
- Create a Conda environment using the `environment.yaml` file:

  ```bash
  conda env create -f environment.yaml
  ```
- Activate the newly created environment:

  ```bash
  conda activate zeromimic
  ```
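After activation, a quick sanity check can confirm the environment resolved correctly. This is a minimal sketch that assumes PyTorch is among the pinned dependencies, as is typical for ACT-derived codebases; check `environment.yaml` for the actual package list.

```python
# Sanity check for the freshly created environment.
# Assumes PyTorch is pinned in environment.yaml (verify before relying on this).
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```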
TODO
With the Conda environment activated, run the following command to execute inference:

```bash
python example.py debug_eval_path="/path/to/your/checkpoint/folder"
```

- Replace `/path/to/your/checkpoint/folder` with the actual path to your checkpoint folder.
- To test the examples under the `example_data` folder, modify `task` and `example_id` on lines 10 and 11 of `example.py`.
- The script generates a video visualizing the predicted human wrist actions.

This codebase is adapted from ACT: Action Chunking with Transformers and Imitation Learning algorithms and Co-training for Mobile ALOHA.
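For intuition about the kind of output video the script produces, the sketch below renders a placeholder wrist-position trajectory to an MP4 frame by frame. It is illustrative only: the trajectory array, its `(T, 3)` shape, the output filename, and the `matplotlib`/`imageio` rendering path are assumptions, not ZeroMimic's actual visualization code in `example.py`.

```python
# Minimal sketch: render a (T, 3) wrist-position trajectory to an MP4.
# All shapes, names, and the rendering approach are illustrative assumptions.
# Writing MP4 via imageio requires the imageio-ffmpeg plugin.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless rendering, no display needed
import matplotlib.pyplot as plt
import imageio.v2 as imageio

# Placeholder prediction: T timesteps of (x, y, z) wrist positions.
T = 50
theta = np.linspace(0, 2 * np.pi, T)
traj = np.stack([np.cos(theta), np.sin(theta), 0.1 * theta], axis=1)

frames = []
for i in range(1, T + 1):
    fig = plt.figure(figsize=(4, 4))
    ax = fig.add_subplot(projection="3d")
    ax.plot(traj[:i, 0], traj[:i, 1], traj[:i, 2], color="tab:blue")
    ax.scatter(traj[i - 1, 0], traj[i - 1, 1], traj[i - 1, 2], color="tab:red")
    ax.set_xlim(-1.5, 1.5)
    ax.set_ylim(-1.5, 1.5)
    ax.set_zlim(0.0, 1.0)
    fig.canvas.draw()
    # Grab the rendered canvas as an RGB frame.
    frame = np.asarray(fig.canvas.buffer_rgba())[..., :3]
    frames.append(frame)
    plt.close(fig)

imageio.mimsave("wrist_prediction.mp4", frames, fps=10)
```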

