Yu Zheng*, Boyang Gong*, Fanye Kong, Yueqi Duan, Bingyao Yu, Wenzhao Zheng, Lei Chen, Jiwen Lu, Jie Zhou
*Equal Contribution.
Department of Automation, Tsinghua University
CDAL (Counterfactually Decoupled Attention Learning) is a plug-and-play framework for open-world model attribution that explicitly models causal relationships between visual forgery traces and source models. By extracting factual and counterfactual attention maps and maximizing their causal effect, CDAL effectively isolates model-specific artifacts from source content biases, thus enhancing generalization to unseen generative models with minimal computational overhead.
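To make the idea concrete, below is a minimal PyTorch sketch of the counterfactual attention scheme described above. It is an illustrative sketch only, not the released implementation: the module names, the random counterfactual attention map, and the simple causal-effect loss are all assumptions made for exposition.

```python
# Minimal sketch of counterfactual attention decoupling (illustrative only).
# Names such as CounterfactualAttentionHead and causal_effect_loss are assumptions,
# not the authors' actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CounterfactualAttentionHead(nn.Module):
    """Classifies backbone features under a learned (factual) attention map and a
    counterfactual one, so training can maximize the causal effect of attention."""

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.attn = nn.Conv2d(in_channels, 1, kernel_size=1)  # factual attention map
        self.classifier = nn.Linear(in_channels, num_classes)

    def _attend_and_classify(self, feats, attn_map):
        # Weight spatial features by the attention map, pool, then classify.
        weighted = feats * attn_map
        pooled = weighted.mean(dim=(2, 3))
        return self.classifier(pooled)

    def forward(self, feats):
        # Factual attention learned from the features themselves.
        attn = torch.sigmoid(self.attn(feats))
        logits_fact = self._attend_and_classify(feats, attn)

        # Counterfactual attention: a random map standing in for "what if the
        # network attended elsewhere (e.g., to source-content biases)".
        attn_cf = torch.rand_like(attn)
        logits_cf = self._attend_and_classify(feats, attn_cf)

        # Causal effect of the learned attention = change in prediction.
        effect = logits_fact - logits_cf
        return logits_fact, effect


def causal_effect_loss(logits_fact, effect, labels):
    # Supervise the factual prediction and push the causal effect toward the
    # correct source model, encouraging attention on model-specific traces.
    return F.cross_entropy(logits_fact, labels) + F.cross_entropy(effect, labels)


if __name__ == "__main__":
    feats = torch.randn(4, 256, 14, 14)    # backbone feature maps (assumed shape)
    labels = torch.randint(0, 10, (4,))    # source-model labels
    head = CounterfactualAttentionHead(256, 10)
    logits_fact, effect = head(feats)
    loss = causal_effect_loss(logits_fact, effect, labels)
    loss.backward()
```

Because the head only consumes backbone feature maps, this kind of module can be attached to an existing attribution network as a plug-and-play component, which is the spirit of the framework.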
For experiments on the OSMA benchmark, please see CDAL-OSMA.
For experiments on the OW-DFA benchmark, please see CDAL-OW-DFA.
If you find our work useful, please consider citing our paper:
@inproceedings{zheng2025cdal,
  title={Learning Counterfactually Decoupled Attention for Open-World Model Attribution},
  author={Yu Zheng and Boyang Gong and Fanye Kong and Yueqi Duan and Bingyao Yu and Wenzhao Zheng and Lei Chen and Jiwen Lu and Jie Zhou},
  booktitle={ICCV},
  year={2025}
}
