FOA-Attack enhances adversarial transferability against multimodal large language models by jointly optimizing global feature alignment (via cosine similarity) and local feature alignment (via optimal transport).
- [2025-09-19] Our paper has been accepted to NeurIPS 2025! 🎉
- [2025-05-29] We release the FOA-Attack code! 🚀
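The combined global/local alignment objective can be sketched as follows. This is a minimal illustrative sketch, not the repository's implementation: the function names, the entropic Sinkhorn solver, and the uniform marginals are assumptions for demonstration.

```python
import numpy as np

def cosine_sim(a, b):
    # cosine similarity between two feature vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def sinkhorn(cost, reg=0.1, n_iters=50):
    # entropic-regularized optimal transport plan with uniform marginals
    n, m = cost.shape
    K = np.exp(-cost / reg)
    r, c = np.ones(n) / n, np.ones(m) / m
    u, v = np.ones(n) / n, np.ones(m) / m
    for _ in range(n_iters):
        u = r / (K @ v)
        v = c / (K.T @ u)
    return u[:, None] * K * v[None, :]

def foa_style_loss(adv_feats, tgt_feats):
    # global term: cosine similarity of mean-pooled (global) features
    g = cosine_sim(adv_feats.mean(0), tgt_feats.mean(0))
    # local term: OT cost between local (patch-level) features,
    # with cost = 1 - pairwise cosine similarity
    a = adv_feats / (np.linalg.norm(adv_feats, axis=1, keepdims=True) + 1e-8)
    b = tgt_feats / (np.linalg.norm(tgt_feats, axis=1, keepdims=True) + 1e-8)
    cost = 1.0 - a @ b.T
    plan = sinkhorn(cost)
    local = float((plan * cost).sum())
    # minimize: maximize global similarity, minimize local transport cost
    return -g + local
```

In this sketch, lowering the loss pulls the adversarial image's global feature toward the target's while also matching local features under the transport plan.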
Dependencies: To install requirements:

```
pip install -r requirements.txt
```

Generate adversarial samples with FOA-Attack:

```
python generate_adversarial_samples_foa_attack.py
```

Query the black-box models to generate text descriptions:

```
python blackbox_text_generation.py -m blackbox.model_name=gpt4o,claude,gemini
```

Evaluate the generated descriptions with GPT:

```
python gpt_evaluate.py -m blackbox.model_name=gpt4o,claude,gemini
```

Run keyword matching on the GPT outputs:

```
python keyword_matching_gpt.py -m blackbox.model_name=gpt4o,claude,gemini
```

Run the FOA-Attack script:

```
python FOAttack.py
```

This project is built on M-Attack. We sincerely thank them for their outstanding work.