
✨ MAD-AD ✨

A PyTorch Implementation for Unsupervised Brain Anomaly Detection

This repository hosts the official PyTorch implementation of our paper accepted at IPMI 2025:
"MAD-AD: Masked Diffusion for Unsupervised Brain Anomaly Detection".


🎨 Approach

MAD-AD Method


⚙️ Setup

🛠️ Environment

Our experiments were run with Python 3.11. Install the required packages by executing:

pip3 install -r requirements.txt

📁 Datasets

Prepare your data as follows:

  1. Data Registration & Preprocessing:

    • Register the volumes to the MNI_152_1mm template.
    • Preprocess and normalize the volumes, then extract axial slices.
  2. Dataset Organization:

    • Ensure the training and validation sets contain only normal, healthy data.
    • The test set should include abnormal slices together with their segmentation masks.
    • Organize your files using this structure:
    ├── Data
        ├── train
        │   ├── {train_image_id}-slice_{slice_idx}-{modality}.png
        │   ├── {train_image_id}-slice_{slice_idx}-brainmask.png
        │   └── ...
        ├── val
        │   ├── {val_image_id}-slice_{slice_idx}-{modality}.png
        │   ├── {val_image_id}-slice_{slice_idx}-brainmask.png
        │   └── ...
        └── test
            ├── {test_image_id}-slice_{slice_idx}-{modality}.png
            ├── {test_image_id}-slice_{slice_idx}-brainmask.png
            ├── {test_image_id}-slice_{slice_idx}-segmentation.png
            └── ...
    
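The filename convention above encodes the image id, slice index, and modality (or `brainmask`/`segmentation`) in each name. A small sketch of how such names could be parsed and listed; `parse_slice_filename` and `list_modality_slices` are hypothetical helpers for illustration, not part of this repository:

```python
import re
from pathlib import Path

# Hypothetical helper matching the convention
# {image_id}-slice_{slice_idx}-{suffix}.png, where suffix is a modality
# (e.g. T1), "brainmask", or "segmentation" (test set only).
FNAME_RE = re.compile(r"^(?P<image_id>.+)-slice_(?P<slice_idx>\d+)-(?P<suffix>[^-]+)\.png$")

def parse_slice_filename(name: str) -> dict:
    """Split a slice filename into image id, slice index, and suffix."""
    m = FNAME_RE.match(name)
    if m is None:
        raise ValueError(f"Unexpected filename: {name}")
    return {
        "image_id": m.group("image_id"),
        "slice_idx": int(m.group("slice_idx")),
        "suffix": m.group("suffix"),
    }

def list_modality_slices(split_dir: Path, modality: str) -> list[Path]:
    """List all slices of one modality in a split directory (train/val/test)."""
    return sorted(split_dir.glob(f"*-slice_*-{modality}.png"))
```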

🔧 Pretrained Weights & VAE Fine-Tuning

Pretrained VAE Models

To jumpstart your experiments, we provide pretrained weights adapted for 1-channel medical brain images. These models are available on HuggingFace.

Train & Fine-Tune VAE

If you prefer to train your own VAE from scratch, please refer to the LDM-VAE repository for detailed instructions.


🚄 Training MAD-AD

To train MAD-AD, run the following command. This example configuration uses the UNet_L model with data augmentation and the pretrained VAE:

torchrun train_MAD_AD.py \
            --modality T1 \
            --model UNet_L \
            --mask-ratio 0.75 \
            --image-size 256 \
            --augmentation True \
            --data_root ./data/ \
            --ckpt-every 20 

🚦 Evaluating MAD-AD

To evaluate a trained MAD-AD model, use the following command. Note that evaluate-MAD-AD.py loads its configuration and arguments from the YAML file located in the parent directory of the given checkpoint path. The script computes four evaluation metrics and saves per-image visualizations in the parent folder of the model path:

torchrun evaluate-MAD-AD.py \
            --data-root ./data/ \
            --model-path ./MAD-AD_T2_UNet_L/001-UNet_L/checkpoints/last.pt
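Anomaly detection of this kind typically scores each pixel by how poorly the model reconstructs it. A sketch of that scoring step, restricted to the brain region; `anomaly_map` is a hypothetical illustration, since the paper's exact metric computation lives in evaluate-MAD-AD.py:

```python
import torch

def anomaly_map(original: torch.Tensor, reconstruction: torch.Tensor,
                brainmask: torch.Tensor) -> torch.Tensor:
    """Pixel-wise anomaly score: absolute reconstruction error inside the brain.

    All three tensors share the same spatial shape; `brainmask` is binary
    (1 inside the brain, 0 outside). Illustrative sketch only.
    """
    err = (original - reconstruction).abs()
    return err * brainmask
```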

📸 Sample Results

Sample Results


📚 Citation & Reference

If you find MAD-AD useful in your research, please cite our work:

@inproceedings{beizaee2025mad,
  title={MAD-AD: Masked Diffusion for Unsupervised Brain Anomaly Detection},
  author={Beizaee, Farzad and Lodygensky, Gregory and Desrosiers, Christian and Dolz, Jose},
  booktitle={International Conference on Information Processing in Medical Imaging},
  pages={139--153},
  year={2025},
  organization={Springer}
}
