[2025/08/21] 🎉🎉🎉 Our work has been accepted to EMNLP 2025 Main!
We reveal a response uncertainty phenomenon: across nine datasets, twelve open-source MLLMs overturn a previously correct answer after a single deceptive cue.
- Two-stage evaluation + MUB. We propose a two-stage misleading-instruction evaluation with a misleading rate (MR) metric (a code sketch follows this list), then curate the Multimodal Uncertainty Benchmark (MUB): image–question pairs stratified into low / medium / high difficulty by how many of the twelve MLLMs they mislead.
- High uncertainty & robustness gains. On 12 open-source and 5 closed-source models, average misleading rates exceed 86% (67.19% explicit, 80.67% implicit). Fine-tuning on 2,000 mixed-instruction samples cuts the misleading rate to 6.97% (explicit) and 32.77% (implicit), improves consistency by ~29.37%, and slightly improves accuracy on standard benchmarks.
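For concreteness, here is a minimal sketch of how the misleading rate (MR) can be computed; the record schema (`ground_truth`, `first_answer`, `misled_answer`) is illustrative, not the repo's actual format:

```python
# Minimal sketch of the misleading rate (MR): among samples a model answers
# correctly at first, the fraction whose answer flips after a misleading cue.
# The record schema below is illustrative, not the repo's actual format.

def misleading_rate(records):
    """records: dicts with 'ground_truth', 'first_answer', 'misled_answer'."""
    initially_correct = [r for r in records if r["first_answer"] == r["ground_truth"]]
    if not initially_correct:
        return 0.0
    flipped = sum(r["misled_answer"] != r["ground_truth"] for r in initially_correct)
    return flipped / len(initially_correct)

samples = [
    {"ground_truth": "A", "first_answer": "A", "misled_answer": "B"},  # flipped
    {"ground_truth": "C", "first_answer": "C", "misled_answer": "C"},  # held firm
    {"ground_truth": "D", "first_answer": "D", "misled_answer": "A"},  # flipped
    {"ground_truth": "B", "first_answer": "C", "misled_answer": "C"},  # excluded: wrong at stage one
]
print(f"MR = {misleading_rate(samples):.1%}")  # MR = 66.7%
```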
Before running the code, set up the required conda environments: `glm`, `llava`, `MiniCPM-V`, and `mmstar`.
📥 Installation Steps:
- Navigate to the `env` folder.
- Install the environments from the corresponding `.yml` files:
```bash
conda env create -f env/glm.yml
conda env create -f env/llava.yml
conda env create -f env/MiniCPM-V.yml
conda env create -f env/mmstar.yml
```
- Activate the required environment:
```bash
conda activate <ENV_NAME>
```
Download the Multimodal Uncertainty Benchmark (MUB) dataset here.
Extract the downloaded images and place them in the `extract_img_all` folder.
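As a quick sanity check after extraction, a sketch like the following can verify that every image referenced by the benchmark exists on disk; the JSONL filename `MUB.jsonl` and the `image` field name are assumptions, so adapt them to the actual MUB files:

```python
# Sanity-check sketch: confirm the images referenced by the benchmark were
# extracted into extract_img_all/. The JSONL filename and the 'image' field
# name are assumptions; adapt them to the actual MUB release.
import json
from pathlib import Path

img_dir = Path("extract_img_all")
missing = 0
with open("MUB.jsonl", encoding="utf-8") as f:  # hypothetical filename
    for line in f:
        record = json.loads(line)
        if not (img_dir / record["image"]).exists():
            missing += 1
print(f"{missing} referenced images missing from {img_dir}/")
```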
Evaluated Open-source and Closed-source Models:
MiniCPM-v-v2; Phi-3-vision; Yi-VL-6b; Qwen-VL-Chat; Deepseek-VL-7b-Chat; LLaVA-NeXT-7b-vicuna; MiniCPM-Llama3-v2.5; GLM4V-9Bchat; CogVLM-chat; InternVL-Chat-V1-5; LLaVA-Next-34b; Yi-VL-34b; GPT-4o; Gemini-Pro; Claude3-Opus; GLM-4V
Run the explicit misleading evaluation:
```bash
bash MR_test.sh
```
To generate implicit misleading instructions (a sketch of the underlying idea follows these steps):
- Open `implicit/misleading_generate/my_tool.py` and fill in your API key.
- Run:
```bash
bash implicit/misleading_generate/mislead_generate.sh
```
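For intuition, the generation step amounts to prompting an LLM API to wrap each question in a subtle deceptive cue. Below is a minimal sketch of that idea using the OpenAI Python client; the prompt wording, model name, and helper function are illustrative, not the repo's exact implementation in `my_tool.py`:

```python
# Illustrative sketch of generating an implicit misleading cue for a question.
# The prompt, model name, and helper are assumptions; my_tool.py holds the
# repo's real logic and API configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_implicit_cue(question: str, correct_answer: str) -> str:
    prompt = (
        "Rewrite the following question so it contains a subtle hint that "
        f"points away from the correct answer '{correct_answer}', without "
        "stating any answer outright.\n"
        f"Question: {question}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(generate_implicit_cue("What color is the bus in the image?", "yellow"))
```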
Use the generated data in `implicit/mislead_output`:
```bash
bash implicit/Implicit_MR_test/implicit_MR_test.sh
```
Results are saved in:
- 📁 `result/test_dataset_6.jsonl` → detailed outputs
- 📁 the accompanying `.txt` files → the model's Misleading Rate (MR)
To format the results as a table (a rough sketch of the aggregation follows below):
- Open `extract2table/extract2table.py`.
- Modify `txt_folder_paths` as needed.
- Run:
```bash
python extract2table/extract2table.py
```
The formatted table is saved in 📁 `extract2table/Tables/`.
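Roughly, the aggregation reads each MR `.txt` file and collects the numbers into one table. Here is a sketch under the assumption that each file contains a line such as `MR: 67.19`; the real `extract2table.py` may parse a different format:

```python
# Rough sketch of collecting per-model MR values from .txt files into a CSV.
# Assumes each file contains a line like "MR: 67.19"; the real
# extract2table.py may parse a different format.
import csv
import re
from pathlib import Path

txt_folder_paths = ["result"]  # same variable name as in extract2table.py

rows = []
for folder in txt_folder_paths:
    for txt in Path(folder).glob("*.txt"):
        match = re.search(r"MR:\s*([\d.]+)", txt.read_text())
        if match:
            rows.append({"model": txt.stem, "MR (%)": match.group(1)})

out_dir = Path("extract2table/Tables")
out_dir.mkdir(parents=True, exist_ok=True)
with open(out_dir / "mr_table.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["model", "MR (%)"])
    writer.writeheader()
    writer.writerows(rows)
```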
If you use this work, please cite:
```bibtex
@article{dang2024exploring,
  title={Exploring Response Uncertainty in MLLMs: An Empirical Evaluation under Misleading Scenarios},
  author={Dang, Yunkai and Gao, Mengxi and Yan, Yibo and Zou, Xin and Gu, Yanggan and Liu, Aiwei and Hu, Xuming},
  journal={arXiv preprint arXiv:2411.02708},
  year={2024}
}
```
For any issues, please open a GitHub issue or reach out via email: [email protected]



