ZeroMimic: Distilling Robotic Manipulation Skills from Web Videos

Introduction

Authors: Junyao Shi*, Zhuolun Zhao*, Tianyou Wang, Ian Pedroza†, Amy Luo†, Jie Wang, Jason Ma, Dinesh Jayaraman

University of Pennsylvania

ICRA 2025

Correspondence to: Junyao Shi ([email protected])


This is the official demo code for human wrist action prediction in ZeroMimic. ZeroMimic is a system that distills robotic manipulation skills from egocentric human web videos for diverse zero-shot deployment.

Environment Setup

  1. Create a Conda environment using the environment.yaml file:

    conda env create -f environment.yaml
  2. Activate the newly created environment:

    conda activate zeromimic
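
As a quick sanity check that the environment was created correctly, you can try importing the core deep-learning dependency. This is a minimal check, assuming PyTorch is among the packages pinned in environment.yaml (this codebase is adapted from ACT, which is PyTorch-based):

    # assumes PyTorch is a pinned dependency of the zeromimic environment
    python -c "import torch; print(torch.__version__)"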

Download Checkpoints

TODO

Run Inference

With the Conda environment activated, run the following command to execute inference:

python example.py debug_eval_path="/path/to/your/checkpoint/folder"

Replace "/path/to/your/checkpoint/folder" with the actual path to your checkpoint folder. To try the examples under the example_data folder, modify task and example_id on lines 10 and 11 of example.py (see the sketch below). The script generates a video visualizing the predicted wrist actions of the human hand.
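
For illustration only, lines 10 and 11 of example.py might be edited along these lines. The variable names task and example_id come from the instructions above; the values shown here are hypothetical placeholders, so substitute a task and example that actually exist under example_data:

    task = "open_drawer"  # hypothetical: name of a task folder under example_data
    example_id = "0"      # hypothetical: ID of an example within that task folder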

Demo Video

Acknowledgement

This codebase is adapted from ACT: Action Chunking with Transformers and from Imitation Learning algorithms and Co-training for Mobile ALOHA.
