
AnchorWeave: World-Consistent Video Generation with Retrieved Local Spatial Memories

Project Website Paper Code

Zun Wang1, Han Lin1, Jaehong Yoon2, Jaemin Cho3, Yue Zhang1, Mohit Bansal1

1University of North Carolina, Chapel Hill · 2NTU Singapore · 3AI2

Abstract

Maintaining spatial world consistency over long horizons remains a central challenge for camera-controllable video generation. Existing memory-based approaches often condition generation on a globally reconstructed 3D scene by rendering anchor videos from geometry reconstructed from the generation history. However, reconstructing a global 3D scene from multiple views inevitably introduces cross-view misalignment: pose and depth estimation errors cause the same surfaces to be reconstructed at slightly different 3D locations across views. When fused, these inconsistencies accumulate into noisy geometry that contaminates the conditioning signals and degrades generation quality.

We introduce AnchorWeave, a memory-augmented video generation framework that replaces a single misaligned global memory with multiple clean local geometric memories and learns to reconcile their cross-view inconsistencies. To this end, AnchorWeave performs coverage-driven local memory retrieval aligned with the target trajectory and integrates the selected local memories through a multi-anchor weaving controller during generation. Extensive experiments demonstrate that AnchorWeave significantly improves long-term scene consistency while maintaining strong visual quality, with ablation and analysis studies further validating the effectiveness of local geometric conditioning, multi-anchor control, and coverage-driven retrieval.
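The coverage-driven retrieval described above can be thought of as a greedy set-cover over the target trajectory: each local memory covers some subset of the target frames, and memories are selected to maximize newly covered frames. The sketch below is a hypothetical illustration of that idea only, not the paper's actual implementation; the function name, the set-based coverage representation, and the `budget` parameter are all assumptions for exposition.

```python
# Hypothetical sketch of coverage-driven local memory retrieval.
# Each local memory is represented by the set of target-trajectory frame
# indices it covers; we greedily pick the memory that adds the most
# still-uncovered frames until the anchor budget is exhausted.

def retrieve_local_memories(coverage_sets, num_targets, budget):
    """coverage_sets: one set of covered target-frame indices per local memory.
    Returns indices of the selected memories, in selection order."""
    uncovered = set(range(num_targets))
    selected = []
    while uncovered and len(selected) < budget:
        # Pick the unselected memory with the largest marginal coverage gain.
        best = max(
            (i for i in range(len(coverage_sets)) if i not in selected),
            key=lambda i: len(coverage_sets[i] & uncovered),
            default=None,
        )
        if best is None or not coverage_sets[best] & uncovered:
            break  # no remaining memory covers anything new
        selected.append(best)
        uncovered -= coverage_sets[best]
    return selected

# Example: three local memories covering overlapping parts of a
# five-frame target trajectory.
print(retrieve_local_memories([{0, 1}, {1, 2, 3}, {3, 4}], 5, 3))
```

In the actual system, "coverage" would be computed geometrically (e.g., from frustum or surface overlap with the target trajectory) rather than handed in as index sets, and the selected memories are then fused by the multi-anchor weaving controller during generation.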


TODO

  • CogVideoX training code
  • Processed training data
  • Data processing scripts
  • Inference example
  • Wan-based code

Setup

1. Clone & Environment

git clone https://github.com/wz0919/AnchorWeave.git
cd AnchorWeave
conda create -n anchorweave python=3.10
conda activate anchorweave
pip install -r requirements.txt

2. Download Models

Place CogVideoX-5B-I2V under ./pretrained/CogVideoX-5b-I2V/:

# Download from https://github.com/THUDM/CogVideo
mkdir -p pretrained
# Place CogVideoX-5b-I2V folder in pretrained/

Training

Edit scripts/train_with_latent.sh to set video_root_dir and output_dir, then:

# Edit GPU config in training/accelerate_config_machine.yaml (num_processes)
bash scripts/train_with_latent.sh

Inference

# Place example dataset under ./data/example_dataset, then:
bash scripts/inference.sh

Acknowledgements


Citation

@article{anchorweave2025,
  title={AnchorWeave: World-Consistent Video Generation with Retrieved Local Spatial Memories},
  author={Wang, Zun and Lin, Han and Yoon, Jaehong and Cho, Jaemin and Zhang, Yue and Bansal, Mohit},
  journal={arXiv preprint arXiv:2602.14941},
  year={2025}
}

About

Official implementation of AnchorWeave: World-Consistent Video Generation with Retrieved Local Spatial Memories
