XIAO4579/Vlm-interpretability

Towards Understanding How Knowledge Evolves in Large Vision-Language Models

Code for the CVPR 2025 paper "Towards Understanding How Knowledge Evolves in Large Vision-Language Models"

Overview

Large Vision-Language Models (LVLMs) are gradually becoming the foundation for many artificial intelligence applications. However, their internal working mechanisms continue to puzzle researchers, which in turn limits further enhancement of their capabilities. In this paper, we investigate how multimodal knowledge evolves and eventually induces natural language in LVLMs. We design a series of novel strategies for analyzing internal knowledge within LVLMs, and examine the evolution of multimodal knowledge at three levels: single token probabilities, token probability distributions, and feature encodings. In this process, we identify two key nodes in knowledge evolution, the critical layers and the mutation layers, which divide the evolution process into three stages: rapid evolution, stabilization, and mutation. Our research is the first to reveal the trajectory of knowledge evolution in LVLMs, providing a fresh perspective for understanding their underlying mechanisms.

Setup

The environment should match the model you intend to analyze. For instance, if you are using LLaVA-1.5, please set up your environment following the official guidelines in the LLaVA GitHub repository: https://github.com/haotian-liu/LLaVA.

Experiments

For every open-source VLM you analyze, pass

output_hidden_states=True
return_dict_in_generate=True

to the model's generate function. See the Jupyter notebooks for more details.
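As a minimal sketch of this setup (the real scripts in this repo may differ): the commented lines show how the two flags would be passed to a Hugging Face-style `generate` call, and the runnable helper below illustrates the kind of per-layer token-probability analysis the paper performs, by projecting each layer's hidden state through an unembedding matrix (a logit-lens-style computation on toy random weights; `layerwise_token_probs` and all shapes here are illustrative, not the repo's actual API).

```python
import numpy as np

# With a transformers model, the call would look like (not executed here):
#   outputs = model.generate(
#       **inputs,
#       output_hidden_states=True,
#       return_dict_in_generate=True,
#   )
#   # outputs.hidden_states[step][layer] -> (batch, seq_len, hidden_dim)

def layerwise_token_probs(hidden_states, unembed, token_id):
    """Probability of `token_id` at each layer, given one hidden-state
    vector per layer (last position) and an unembedding matrix."""
    probs = []
    for h in hidden_states:            # h: (hidden_dim,)
        logits = h @ unembed           # (vocab_size,)
        logits -= logits.max()         # numerical stability for softmax
        p = np.exp(logits) / np.exp(logits).sum()
        probs.append(p[token_id])
    return np.array(probs)

# Toy demonstration with random weights standing in for real LVLM states.
rng = np.random.default_rng(0)
layers, dim, vocab = 4, 8, 16
hs = [rng.normal(size=dim) for _ in range(layers)]
W = rng.normal(size=(dim, vocab))
probs = layerwise_token_probs(hs, W, token_id=3)
print(probs.shape)  # one probability per layer
```

Tracking how such a per-layer probability curve rises, plateaus, or jumps is what lets the paper locate the critical and mutation layers.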

To generate the heatmap image, run heatmap.py Amber25_0

To generate the token line chart, run plot_token_probabilities_area.py Amber25_0

To generate the simple t-SNE plot, run tsne.py Amber25_0

To generate the combined t-SNE plot, run combined_tsne.py Amber25_0

Citation

If you find our project useful, please star the repo and cite our paper:

@article{wang2025towards,
  title={Towards Understanding How Knowledge Evolves in Large Vision-Language Models},
  author={Wang, Sudong and Zhang, Yunjian and Zhu, Yao and Li, Jianing and Wang, Zizhe and Liu, Yanwei and Ji, Xiangyang},
  journal={arXiv preprint arXiv:2504.02862},
  year={2025}
}
