This repository was archived by the owner on Feb 7, 2025. It is now read-only.

MONAI Generative Models Tutorials

This directory hosts the MONAI Generative Models tutorials.

Requirements

To run the tutorials, you will need to install the Generative Models package (published on PyPI as monai-generative). Most of the examples and tutorials also require matplotlib and Jupyter Notebook.

These can be installed with the following:

python -m pip install -U pip
python -m pip install -U monai-generative
python -m pip install -U matplotlib
python -m pip install -U notebook

Some of the examples require optional dependencies. If you hit an optional import error, install the relevant package according to MONAI's installation guide, or install all optional requirements at once with the following:

pip install -r requirements-dev.txt

List of notebooks and examples

Table of Contents

  1. Diffusion Models
  2. Latent Diffusion Models
  3. VQ-VAE + Transformers

1. Diffusion Models

Image synthesis with Diffusion Models

  • Training a 3D Denoising Diffusion Probabilistic Model: This tutorial shows how to train a DDPM on 3D medical data, using a downsampled version of the BraTS dataset. We show how to use the UNet model and the Noise Scheduler needed to train a diffusion model, and how the DiffusionInferer class simplifies the training and sampling processes. Finally, after training the model, we show how to sample synthetic images using a Noise Scheduler with fewer timesteps.
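
  The closed-form forward (noising) process that a DDPM noise scheduler implements can be sketched in a few lines. This is a toy numpy version with a linear beta schedule, not the MONAI API:

  ```python
  import numpy as np

  def linear_beta_schedule(num_timesteps=1000, beta_start=1e-4, beta_end=2e-2):
      # Variance schedule for the forward process (a common default choice).
      return np.linspace(beta_start, beta_end, num_timesteps)

  def add_noise(x0, noise, t, alphas_cumprod):
      # Closed form of q(x_t | x_0):
      #   x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * noise
      ab = alphas_cumprod[t]
      return np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * noise

  betas = linear_beta_schedule()
  alphas_cumprod = np.cumprod(1.0 - betas)

  x0 = np.random.randn(1, 1, 32, 32, 32)   # stand-in for a 3D medical image
  noise = np.random.randn(*x0.shape)
  xt = add_noise(x0, noise, t=500, alphas_cumprod=alphas_cumprod)
  ```

  The model is then trained to predict the noise given xt and t; the scheduler classes in the tutorial encapsulate exactly this bookkeeping.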

  • Training a 2D Denoising Diffusion Probabilistic Model: This tutorial shows how to easily train a DDPM on medical data. In this example, we use the MedNIST dataset, which is very suitable for beginners as a tutorial.

  • Comparing different noise schedulers: In this tutorial, we compare the performance of different noise schedulers. We will show how to sample a diffusion model using the DDPM, DDIM, and PNDM schedulers and how different numbers of timesteps affect the quality of the samples.
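
  The key idea behind sampling with fewer timesteps (as DDIM and PNDM allow) is to subsample the training timesteps at inference. A minimal sketch, with hypothetical names of our own:

  ```python
  def inference_timesteps(num_train_timesteps=1000, num_inference_steps=50):
      # Evenly subsample the training timesteps, returned in reverse order
      # so sampling walks from the noisiest step down to t = 0.
      step = num_train_timesteps // num_inference_steps
      return list(range(0, num_train_timesteps, step))[::-1]

  timesteps = inference_timesteps()   # e.g. [980, 960, ..., 20, 0]
  ```

  The notebook compares how sample quality degrades (or not) as this list gets shorter for each scheduler.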

  • Training a 2D Denoising Diffusion Probabilistic Model with a different parameterization: MONAI Generative Models supports different parameterizations for the diffusion model (epsilon, sample, and v-prediction). In this tutorial, we show how to train a DDPM using the v-prediction parameterization, which improves the stability and convergence of the model.
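
  In the v-prediction parameterization, the model predicts v = alpha_t * eps - sigma_t * x_0 instead of the noise eps, and x_0 can be recovered algebraically. A scalar numpy sketch of the identities involved (not the tutorial's code):

  ```python
  import numpy as np

  def to_v(x0, eps, alpha, sigma):
      # v-prediction target: v = alpha * eps - sigma * x0
      return alpha * eps - sigma * x0

  def x0_from_v(xt, v, alpha, sigma):
      # Since x_t = alpha * x0 + sigma * eps and alpha^2 + sigma^2 = 1,
      # it follows that x0 = alpha * x_t - sigma * v.
      return alpha * xt - sigma * v

  alpha, sigma = 0.8, 0.6          # satisfies alpha^2 + sigma^2 = 1
  x0 = np.random.randn(4)
  eps = np.random.randn(4)
  xt = alpha * x0 + sigma * eps
  v = to_v(x0, eps, alpha, sigma)
  ```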

  • Training a 2D DDPM using PyTorch Ignite: Here, we show how to train a DDPM on medical data using PyTorch Ignite. We will show how to use DiffusionPrepareBatch to prepare the model inputs, and MONAI's SupervisedTrainer and SupervisedEvaluator to train DDPMs.
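
  The role of a "prepare batch" step for diffusion training can be sketched as follows: for each batch it draws fresh noise and random timesteps, so that a generic supervised trainer can call the model on (noisy images, timesteps) and use the noise as the regression target. This is a hypothetical numpy helper to illustrate the idea, not the DiffusionPrepareBatch API itself:

  ```python
  import numpy as np

  def prepare_diffusion_batch(images, num_train_timesteps=1000, rng=None):
      # Draw per-batch noise and one random timestep per sample.
      rng = rng if rng is not None else np.random.default_rng(0)
      noise = rng.standard_normal(images.shape)
      timesteps = rng.integers(0, num_train_timesteps, size=images.shape[0])
      return images, noise, timesteps

  images = np.zeros((8, 1, 64, 64))
  images_out, noise, timesteps = prepare_diffusion_batch(images)
  ```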

  • Using a 2D DDPM to inpaint images: In this tutorial, we show how to use a DDPM to inpaint 2D images from the MedNIST dataset using the RePaint method.
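
  The core RePaint update can be sketched in one line: at each reverse step, the known region is re-noised from the original image and merged with the model's sample for the masked region. Toy numpy values, not the tutorial's exact code:

  ```python
  import numpy as np

  def repaint_merge(x_known_t, x_unknown_t, mask):
      # mask == 1 where the pixel is known (kept from the noised original image);
      # the model's sample fills in the rest.
      return mask * x_known_t + (1.0 - mask) * x_unknown_t

  mask = np.zeros((4, 4)); mask[:2] = 1.0   # top half of the image is known
  x_known_t = np.ones((4, 4))               # noised original (toy values)
  x_unknown_t = -np.ones((4, 4))            # model sample (toy values)
  merged = repaint_merge(x_known_t, x_unknown_t, mask)
  ```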

  • Generating conditional samples with a 2D DDPM using classifier-free guidance: This tutorial shows how easily we can train a Diffusion Model and generate conditional samples using classifier-free guidance in MONAI's framework.
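
  At sampling time, classifier-free guidance combines the conditional and unconditional noise predictions with a guidance scale w. A sketch of just that formula, with toy inputs:

  ```python
  import numpy as np

  def cfg_combine(eps_uncond, eps_cond, guidance_scale):
      # eps = eps_uncond + w * (eps_cond - eps_uncond)
      # w = 1 recovers the conditional prediction; larger w pushes samples
      # further toward the conditioning signal.
      return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

  eps_uncond = np.zeros(3)
  eps_cond = np.ones(3)
  eps = cfg_combine(eps_uncond, eps_cond, guidance_scale=7.0)
  ```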

  • Training Diffusion models with Distributed Data Parallel: This example shows how to execute distributed training and evaluation based on PyTorch native DistributedDataParallel module with torch.distributed.launch.

Anomaly Detection with Diffusion Models

2. Latent Diffusion Models

Image synthesis with Latent Diffusion Models

  • Training a 3D Latent Diffusion Model: This tutorial shows how to train an LDM on 3D medical data. In this example, we use the BraTS dataset. We show how to train an AutoencoderKL and connect it to an LDM. We also discuss the importance of the scaling factor, used to scale the latent representation of the AutoencoderKL to a range suitable for the diffusion model. Finally, we show how to use the LatentDiffusionInferer class to simplify the training and sampling.
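
  A common way to choose the scaling factor (as in the original LDM work) is the reciprocal of the standard deviation of the autoencoder's latents on a first training batch, so the scaled latents are roughly unit-variance. A numpy sketch with a random stand-in for the AutoencoderKL output:

  ```python
  import numpy as np

  rng = np.random.default_rng(0)
  # Stand-in for a batch of AEKL latents with too-large variance.
  latents = 5.0 * rng.standard_normal((16, 3, 20, 20))

  scale_factor = 1.0 / float(np.std(latents))
  scaled = latents * scale_factor   # roughly unit-variance input for the LDM
  ```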

  • Training a 2D Latent Diffusion Model: This tutorial shows how to train an LDM on medical images from the MedNIST dataset. We show how to train an AutoencoderKL and connect it to an LDM.

  • Training Autoencoder with KL-regularization: In this section, we focus on training an AutoencoderKL on 2D and 3D medical data; the trained autoencoder can then be used as the compression model in a Latent Diffusion Model.

Super-resolution with Latent Diffusion Models

  • Super-resolution using the Stable Diffusion Upscalers method: In this tutorial, we show how to perform super-resolution on 2D images from the MedNIST dataset using the Stable Diffusion Upscalers method. We show how to condition a latent diffusion model on a low-resolution image, as well as how to use the DiffusionModelUNet's class_labels conditioning to condition the model on the level of noise added to the image (aka "noise conditioning augmentation").
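
  The idea behind noise conditioning augmentation can be sketched as follows: the low-resolution conditioning image is itself noised to a random level, and that level is fed to the model (via the class_labels argument in the tutorial). A toy numpy version with illustrative names of our own:

  ```python
  import numpy as np

  def augment_low_res(low_res, alphas_cumprod, rng):
      # Pick a random noise level and noise the conditioning image to it,
      # using the same closed-form forward process as the diffusion model.
      noise_level = int(rng.integers(0, len(alphas_cumprod)))
      ab = alphas_cumprod[noise_level]
      noisy = np.sqrt(ab) * low_res + np.sqrt(1.0 - ab) * rng.standard_normal(low_res.shape)
      return noisy, noise_level   # noise_level is passed to the UNet as conditioning

  rng = np.random.default_rng(0)
  alphas_cumprod = np.cumprod(1.0 - np.linspace(1e-4, 2e-2, 350))
  low_res = np.zeros((1, 1, 16, 16))
  noisy_lr, level = augment_low_res(low_res, alphas_cumprod, rng)
  ```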

3. VQ-VAE + Transformers

Image synthesis with VQ-VAE + Transformers

  • Training a 2D VQ-VAE + Autoregressive Transformers: This tutorial shows how to train a Vector Quantized Variational Autoencoder (VQ-VAE) + Transformer on the MedNIST dataset.
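
  The quantization step at the heart of a VQ-VAE replaces each encoder vector with its nearest codebook entry; the resulting indices are the discrete tokens the autoregressive transformer is trained on. A toy numpy version of that lookup, not the VQVAE class itself:

  ```python
  import numpy as np

  def quantize(z, codebook):
      # z: (N, D) encoder outputs; codebook: (K, D) learned embeddings.
      d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)   # (N, K) squared distances
      indices = d.argmin(axis=1)                                  # discrete tokens
      return codebook[indices], indices

  codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
  z = np.array([[0.1, -0.1], [0.9, 1.2]])
  z_q, tokens = quantize(z, codebook)
  ```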

  • Training VQ-VAEs and VQ-GANs: In this section, we show how to train Vector Quantized Variational Autoencoders (on 2D and 3D data) and how to use the PatchDiscriminator class to train a VQ-GAN and improve the quality of the generated images.

Anomaly Detection with VQ-VAE + Transformers