🚀 DiCache: Let Diffusion Model Determine Its Own Cache

Jiazi Bu¹,⁵* Pengyang Ling²,⁵* Yujie Zhou¹,⁵* Yibin Wang³,⁶

Yuhang Zang⁵ Dahua Lin⁴,⁵,⁷ Jiaqi Wang⁵,⁶†

¹ Shanghai Jiao Tong University    ² University of Science and Technology of China    ³ Fudan University
⁴ The Chinese University of Hong Kong    ⁵ Shanghai Artificial Intelligence Laboratory
⁶ Shanghai Innovation Institute    ⁷ CPII under InnoHK
(* Equal Contribution   † Corresponding Author)

[Paper]      [Code]


Demo Video for DiCache

Abstract

Recent years have witnessed the rapid development of acceleration techniques for diffusion models, especially caching-based methods. These studies seek to answer two fundamental questions, "when to cache" and "how to use the cache," typically relying on predefined empirical laws or dataset-level priors to determine caching timings and on handcrafted rules for multi-step cache utilization. However, given the highly dynamic nature of the diffusion process, such schemes often generalize poorly and fail to cope with diverse samples. In this paper, we reveal a strong sample-specific correlation between the variation patterns of shallow-layer feature differences in a diffusion model and those of its deep-layer features. Moreover, we observe that the features from different model layers form similar trajectories. Based on these observations, we present DiCache, a novel training-free adaptive caching strategy that accelerates diffusion models at runtime and answers both when and how to cache within a unified framework. Specifically, DiCache comprises two principal components: (1) the Online Probe Profiling Scheme leverages a shallow-layer online probe to obtain an on-the-fly indicator of the caching error, enabling the model to dynamically customize the caching schedule for each sample; (2) Dynamic Cache Trajectory Alignment adaptively approximates the deep-layer feature output from multi-step historical caches based on the shallow-layer feature trajectory, yielding higher visual quality. Extensive experiments validate DiCache's ability to deliver higher efficiency and improved fidelity than state-of-the-art approaches on leading diffusion models, including WAN 2.1, HunyuanVideo, and Flux. Our code is available at the DiCache Repo.

Observation and Motivation

(1) For a given sampling process, the difference in shallow-layer features strongly correlates with the difference in deep-layer features on a sample-specific basis, so the shallow-layer difference can serve as an on-the-fly proxy for how the final model output evolves; a minimal way to check this on recorded features is sketched below.
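
One way to probe this observation empirically is to record per-step features from a shallow layer and a deep layer during one sampling run and correlate their difference curves. In the sketch below, `shallow_feats` and `deep_feats` are hypothetical lists of per-timestep feature tensors you would collect yourself, and the mean-absolute-difference metric is an illustrative choice, not necessarily the paper's:

    import torch

    def diff_curve(feats):
        # Mean absolute change between features at consecutive timesteps
        return torch.stack([(a - b).abs().mean() for a, b in zip(feats[1:], feats[:-1])])

    # shallow_feats / deep_feats: per-step features recorded during one sampling run
    s, d = diff_curve(shallow_feats), diff_curve(deep_feats)
    corr = torch.corrcoef(torch.stack([s, d]))[0, 1]  # values near 1 reproduce observation (1)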


(2) The features from different DiT blocks form similar trajectories, which makes it possible to dynamically extrapolate the deep-layer feature output by combining multi-step historical caches according to the shallow-layer probe feature trajectory, as sketched below.
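
The following PyTorch sketch shows one way such a combination could be realized under the trajectory-similarity assumption: the current probe feature is expressed as a least-squares combination of the historical probe features, and the resulting weights are reused on the cached deep-layer features. The function name, the least-squares weighting, and the tensor shapes are our own illustrative assumptions rather than the paper's exact formulation.

    import torch

    def align_cache(probe_now, probe_hist, deep_hist):
        """Approximate the current deep-layer feature from multi-step caches.

        probe_now:  (D,) shallow-layer probe feature at the current step
        probe_hist: list of (D,) probe features saved at earlier full-compute steps
        deep_hist:  list of deep-layer features cached at those same steps
        """
        # Solve for weights w such that probe_now ≈ sum_i w_i * probe_hist[i]
        P = torch.stack(probe_hist, dim=1).float()                      # (D, K)
        w = torch.linalg.lstsq(P, probe_now.float().unsqueeze(1)).solution.squeeze(1)
        # Reuse the same weights on the deep-layer caches (trajectory similarity)
        deep = torch.stack(deep_hist, dim=0)                            # (K, ...)
        return torch.einsum('k,k...->...', w.to(deep.dtype), deep)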

Methodology

DiCache consists of the Online Probe Profiling Scheme and Dynamic Cache Trajectory Alignment. The former dynamically determines the caching timing with an online shallow-layer probe at runtime, while the latter combines multi-step caches based on the probe feature trajectory to adaptively approximate the deep-layer feature at the current timestep. By integrating these two techniques, DiCache intrinsically answers when and how to cache within a unified framework, as illustrated by the sketch below.
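
To make the interplay of the two components concrete, here is a minimal, hedged sketch of what such a runtime controller could look like. `forward_shallow`, `forward_deep`, and `head` are hypothetical hooks into the backbone (the shallow probe blocks, the remaining deep blocks, and the output head), and the threshold `tau` is an illustrative value; the sketch reuses `align_cache` from above.

    import torch

    class DiCacheScheduler:
        # Illustrative cache controller; hook names and threshold are assumptions.

        def __init__(self, tau=0.05, max_hist=2):
            self.tau = tau                    # probe-change threshold (assumed value)
            self.max_hist = max_hist          # number of historical caches to keep
            self.probe_hist, self.deep_hist = [], []

        def should_reuse(self, probe_now):
            # Online Probe Profiling Scheme: the relative change of the shallow
            # probe since the last full compute indicates the caching error.
            if len(self.probe_hist) < self.max_hist:
                return False                  # warm up with full computes
            last = self.probe_hist[-1]
            rel = (probe_now - last).norm() / last.norm().clamp_min(1e-8)
            return rel.item() < self.tau

        @torch.no_grad()
        def step(self, model, x, t):
            h = self_feats = model.forward_shallow(x, t)   # run only shallow blocks
            probe_now = self_feats.flatten()               # probe for this sample
            if self.should_reuse(probe_now):
                # Dynamic Cache Trajectory Alignment via multi-step caches
                deep = align_cache(probe_now, self.probe_hist, self.deep_hist)
            else:
                deep = model.forward_deep(h, t)            # continue through deep blocks
                self.probe_hist.append(probe_now)
                self.deep_hist.append(deep)
                self.probe_hist = self.probe_hist[-self.max_hist:]
                self.deep_hist = self.deep_hist[-self.max_hist:]
            return model.head(deep, t)                     # final output layers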

Qualitative Comparison

Qualitative comparisons with existing caching-based methods. DiCache consistently outperforms the baselines in terms of both visual quality and similarity to the original results across diverse scenarios and generation backbones.

Quantitative Evaluation

Quantitative assessments of the proposed DiCache against other baselines. Unlike existing methods, DiCache dynamically determines its caching timings and effectively utilizes multi-step caches based on an online probe, achieving both rapid inference and high visual fidelity. "OOM" indicates CUDA out-of-memory on an A800 80GB GPU.

BibTeX

If you find this work helpful, please cite the following paper:

    @article{bu2025dicache,
      title={DiCache: Let Diffusion Model Determine Its Own Cache},
      author={Bu, Jiazi and Ling, Pengyang and Zhou, Yujie and Wang, Yibin and Zang, Yuhang and Wu, Tong and Lin, Dahua and Wang, Jiaqi},
      journal={arXiv preprint arXiv:2508.17356},
      year={2025}
    }
  

The project page template is borrowed from FreeScale.