Edit360: 2D Image Edits to 3D Assets from Any Angle

ICCV 2025 (Highlight)

Junchao Huang¹, Xinting Hu², Shaoshuai Shi⁴, Zhuotao Tian³, Li Jiang¹,†

1 The Chinese University of Hong Kong, Shenzhen
2 Nanyang Technological University
3 Harbin Institute of Technology, Shenzhen
4 Voyager Research, Didi Chuxing

Demo video: demo.mp4

Installation

```shell
conda create -n edit360 python=3.10
conda activate edit360
pip install -r requirements/pt2.txt
```

Preparation

First, clone the repository and enter the project directory:

```shell
git clone https://github.com/Junchao-cs/Edit360.git
cd Edit360
```

Then, create a folder to store model checkpoints:

```shell
mkdir checkpoints
```

Download the sv3d_u.safetensors checkpoint from Hugging Face and place it in the checkpoints directory.
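Before running the demo, it can save a failed run to confirm the checkpoint is where the scripts expect it. The helper below is a hypothetical sketch (not part of the repository); it only checks the path and file size:

```python
from pathlib import Path


def checkpoint_ready(root: str = ".") -> bool:
    """Return True if the SV3D checkpoint sits at checkpoints/sv3d_u.safetensors
    under the given project root and is non-empty."""
    ckpt = Path(root) / "checkpoints" / "sv3d_u.safetensors"
    return ckpt.is_file() and ckpt.stat().st_size > 0


# Example: run from the Edit360 project root.
# print(checkpoint_ready())  # True once the download is in place
```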

Quick start

```shell
python demo.py
```

Inference

We provide two inference modes:

Single Image Input: generate a 21-frame 3D orbit video from a single input image (same as SV3D).

```shell
python scripts/sampling/simple_video_sample.py \
    --mode one \
    --input_path path/to/front_view.png \
    --version sv3d_u \
    --output_folder_mp4 ./outputs/mp4 \
    --output_folder_img ./outputs/img \
    --seed 23
```

Here `--input_path` is the input image path, and `--output_folder_mp4` and `--output_folder_img` are the output directories for the video and the frame images.
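When driving mode one from another script (e.g., to batch over many inputs), the command line can be assembled programmatically. This is a minimal sketch; the function name and defaults are my own, and the paths are placeholders:

```python
import subprocess


def build_mode_one_cmd(input_path: str,
                       out_mp4: str = "./outputs/mp4",
                       out_img: str = "./outputs/img",
                       seed: int = 23) -> list[str]:
    """Assemble the argv list for a mode-one run of simple_video_sample.py."""
    return [
        "python", "scripts/sampling/simple_video_sample.py",
        "--mode", "one",
        "--input_path", input_path,
        "--version", "sv3d_u",
        "--output_folder_mp4", out_mp4,
        "--output_folder_img", out_img,
        "--seed", str(seed),
    ]


# Example (run from the Edit360 project root):
# subprocess.run(build_mode_one_cmd("path/to/front_view.png"), check=True)
```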

Dual Image Input: generate a 21-frame 3D orbit video from a front view and an anchor view (the edited view). Anchor views can be generated with mode one above.

```shell
python scripts/sampling/simple_video_sample.py \
    --mode two \
    --input_path_f path/to/front_view.png \
    --input_path_b path/to/anchor_view.png \
    --version sv3d_u \
    --output_folder_mp4 ./outputs/mp4 \
    --output_folder_img ./outputs/img \
    --anchor_view_angle 180 \
    --seed 23
```

Here `--input_path_f` is the front-view image path, `--input_path_b` is the anchor-view image path (e.g., a back view), and `--anchor_view_angle` is the horizontal rotation angle of the anchor view, specified as an integer between 0 and 360 (e.g., 180 for a back view).

Note: If the anchor view was generated using mode one (SV3D) and corresponds to a specific frame index (e.g., frame 11 out of 21), you can specify that index directly with the --path_b_num argument:

```shell
python scripts/sampling/simple_video_sample.py \
    --mode two \
    --input_path_f path/to/front_view.png \
    --input_path_b path/to/anchor_view.png \
    --version sv3d_u \
    --output_folder_mp4 ./outputs/mp4 \
    --output_folder_img ./outputs/img \
    --path_b_num 11 \
    --seed 23
```

Here `--path_b_num` is the 1-based frame index (1-21) of the anchor view within the mode-one output; when it is given, `--anchor_view_angle` is not needed.
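If you want to translate between a mode-one frame index and a rough anchor angle yourself, the conversion below is a sketch under the assumption that the 21 frames are spaced evenly over a full 360° orbit starting at the front view; check the sampling script for the exact azimuth convention before relying on it:

```python
def frame_to_angle(frame_idx: int, num_frames: int = 21) -> int:
    """Approximate azimuth in degrees for a 1-based frame index, assuming
    frame 1 is the front view (0 degrees) and frames are evenly spaced
    over 360 degrees. This convention is an assumption, not confirmed
    against the repository's sampling code."""
    if not 1 <= frame_idx <= num_frames:
        raise ValueError("frame index out of range")
    return round((frame_idx - 1) * 360 / num_frames)


# Example: frame 11 of 21 lands a bit short of the exact back view
# under this even-spacing assumption.
```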

Useful Tools

Jimeng AI: text-to-image, image-editing, and style-transfer tool

Adobe Express: image-editing platform

Citation

If you find our work helpful, please cite:

```bibtex
@inproceedings{huang2025edit360,
  title={Edit360: 2D Image Edits to 3D Assets from Any Angle},
  author={Huang, Junchao and Hu, Xinting and Shi, Shaoshuai and Tian, Zhuotao and Jiang, Li},
  booktitle={ICCV},
  year={2025}
}
```

Acknowledgements

  • SV3D: Our model architecture is based on SV3D, and we use its pretrained network weights.
  • Tailor3D: Our approach extends Tailor3D's pioneering 3D asset editing framework by adapting it for arbitrary viewpoint editing through video diffusion models.
  • V3D: We use V3D for 3DGS reconstruction.
