Zihan Su1,
Xuerui Qiu2,
Hongbin Xu3,
Tangyu Jiang1,
Junhao Zhuang1,
Chun Yuan1†,
Ming Li4†,
Shengfeng He5,
Fei Richard Yu4
1 Tsinghua University
2 Institute of Automation, Chinese Academy of Sciences
3 South China University of Technology
4 Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ)
5 Singapore Management University
†Corresponding Authors
- [09/19] 🚀 🚀 Code Released!
- [09/18] 🎉 🎉 Safe-Sora is accepted by NeurIPS 2025!
- [05/23] Initial Preview Release 🔥 Coming Soon!
Safe-Sora is the first framework that integrates graphical watermarks directly into the video generation process.
Each example below shows, from left to right: the original video, the watermarked video, their difference (×5), the original watermark, the recovered watermark, and their difference (×5).
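For reference, an amplified (×5) difference map like the ones above can be produced with a few lines of NumPy. This is a minimal sketch, not part of the released code; the frame file names are placeholders.

```python
# Minimal sketch of an amplified (x5) difference map between two frames.
# File names are placeholders; this is not part of the released code.
import numpy as np
from PIL import Image

def amplified_diff(a_path, b_path, scale=5):
    a = np.asarray(Image.open(a_path), dtype=np.int16)
    b = np.asarray(Image.open(b_path), dtype=np.int16)
    diff = np.clip(np.abs(a - b) * scale, 0, 255).astype(np.uint8)
    return Image.fromarray(diff)

amplified_diff("original_frame.png", "watermarked_frame.png").save("diff_x5.png")
```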
Download the following files and place them in the repository root (see the layout sketch after this list).
- checkpoints contains the pretrained weights for Safe-Sora, VideoCrafter2, the VAE, and the 3D-CNN (simulating H.264 compression).
- dataset contains the Logo-2K dataset and the Panda-70M dataset.
- mamba is provided for setting up the environment.
- causal-conv1d is provided for setting up the environment.
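After downloading, the repository root should look roughly like this (other repository files omitted):

```
Safe-Sora/
├── checkpoints/     # Safe-Sora, VideoCrafter2, VAE, and 3D-CNN weights
├── dataset/         # Logo-2K and Panda-70M
├── mamba/           # Mamba source, used during environment setup
└── causal-conv1d/   # causal-conv1d source, used during environment setup
```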
To install requirements:
```bash
conda create -n safe-sora python=3.9
conda activate safe-sora
conda install pytorch==2.1.1 torchvision==0.16.1 torchaudio==2.1.1 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install packaging ninja==1.11.1.1

# Build the bundled causal-conv1d and mamba from source
cd causal-conv1d
python setup.py install
cd ../mamba
python setup.py install
cd ..

pip install -r requirements.txt
```
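If the build succeeded, a quick import check should pass. This is a minimal sanity sketch, assuming the bundled packages install under their standard names (`mamba_ssm` and `causal_conv1d`) and that a CUDA GPU is available:

```python
# Sanity check for the Safe-Sora environment (requires a CUDA GPU).
# Assumes the standard package names mamba_ssm and causal_conv1d.
import torch
from causal_conv1d import causal_conv1d_fn  # noqa: F401
from mamba_ssm import Mamba

print("torch:", torch.__version__, "| CUDA:", torch.cuda.is_available())
m = Mamba(d_model=16, d_state=16, d_conv=4, expand=2).cuda()
x = torch.randn(1, 8, 16, device="cuda")
print("Mamba forward OK:", tuple(m(x).shape))  # expect (1, 8, 16)
```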
To train Safe-Sora, run this command:
```bash
bash train.sh
```
We train Safe-Sora with PyTorch DistributedDataParallel (DDP). Modify the parameters in train.sh to select which GPUs to use.
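For illustration, a typical DDP launch looks like the sketch below. The entry point and flags here are assumptions, not the actual contents of train.sh; check the script itself for the real parameters.

```bash
# Hypothetical DDP launch; the entry point (train.py) is an assumption.
export CUDA_VISIBLE_DEVICES=0,1,2,3   # choose which GPUs DDP uses
torchrun --nproc_per_node=4 train.py
```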
To evaluate our models, run:
```bash
bash test.sh
```
Our code is based on these awesome repos: VideoCrafter2, Mamba, and causal-conv1d.
If you find our repo helpful, please consider leaving a star or citing our paper :)
```bibtex
@article{su2025safe,
  title={Safe-Sora: Safe Text-to-Video Generation via Graphical Watermarking},
  author={Su, Zihan and Qiu, Xuerui and Xu, Hongbin and Jiang, Tangyu and Zhuang, Junhao and Yuan, Chun and Li, Ming and He, Shengfeng and Yu, Fei Richard},
  journal={arXiv preprint arXiv:2505.12667},
  year={2025}
}
```