This app provides example models for both interactive and automated segmentation of radiology (3D) images, including automated segmentation with the latest deep learning models (e.g., UNet, UNETR) for multiple abdominal organs. Interactive tools include DeepEdit and Deepgrow for actively improving trained models and deployment.
- Supported Viewers
- Pretrained Models
- How To Use the App
- Hybrid Radiology App with Models and Bundles
- Model Details
The Radiology Sample Application supports the following viewers:
For more information on each of the viewers, see the plugin extension folder for the given viewer.
The following models are currently included in the Radiology App:
| Name | Description |
|---|---|
| deepedit | This model is based on DeepEdit: an algorithm that combines the capabilities of multiple models into one, allowing for both interactive and automated segmentation. |
| deepgrow | This model is based on DeepGrow, which allows for interactive segmentation. |
| segmentation | A standard (non-interactive) multi-label [spleen, kidney, liver, stomach, aorta, etc.] model using UNet to label 3D volumes. |
| segmentation_spleen | Uses a pre-trained model (UNet) with weights from NVIDIA Clara for spleen segmentation. |
| Multistage Vertebra Segmentation | An example of a multistage approach for segmenting several structures in a CT image. |
The following commands are examples of how to start the Radiology Sample Application. Make sure to use the correct app and studies paths for your system.
```bash
# Download the Radiology app (skip this if you have already downloaded the app or are using the GitHub repository (dev mode))
monailabel apps --download --name radiology --output workspace

# Start the MONAI Label server with the DeepEdit model
monailabel start_server --app workspace/radiology --studies workspace/images --conf models deepedit

# Start the MONAI Label server with multiple models
monailabel start_server --app workspace/radiology --studies workspace/images --conf models "deepgrow_2d,deepgrow_3d,segmentation"

# Start the MONAI Label server with all stages for vertebra segmentation
monailabel start_server --app workspace/radiology --studies workspace/images --conf models "localization_spine,localization_vertebra,segmentation_vertebra"

# Start the MONAI Label server with the DeepEdit model preloaded on GPU
monailabel start_server --app workspace/radiology --studies workspace/images --conf models deepedit --conf preload true

# Start the MONAI Label server with DeepEdit in inference-only mode
monailabel start_server --app workspace/radiology --studies workspace/images --conf models deepedit --conf skip_trainers true
```

The Radiology app now supports loading models locally or from bundles in the MONAI Model Zoo.
```bash
# Example: pick the spleen and multi-organ segmentation models, plus two Model Zoo bundles
monailabel start_server \
  --app workspace/radiology \
  --studies workspace/images \
  --conf models segmentation_spleen,segmentation \
  --conf bundles spleen_ct_segmentation_v0.2.0,swin_unetr_btcv_segmentation_v0.2.0
```

DeepEdit is an algorithm that combines the capabilities of multiple models into one, allowing for both interactive and automated segmentation.
This model works for single and multiple label segmentation tasks.
```bash
monailabel start_server --app workspace/radiology --studies workspace/images --conf models deepedit
```
- Additional configs (pass them as `--conf <name> <value>` when starting the MONAI Label server):
| Name | Values | Description |
|---|---|---|
| network | dynunet, unetr | Use one of these networks and its corresponding pretrained weights |
| use_pretrained_model | true, false | Set to `false` to skip loading pretrained weights |
| skip_scoring | true, false | Set to `false` to enable scoring methods |
| skip_strategies | true, false | Set to `false` to enable active learning strategies |
| epistemic_enabled | true, false | Enable Epistemic based Active Learning Strategy |
| epistemic_samples | int | Limit number of samples to run epistemic scoring |
| preload | true, false | Preload model into GPU |
A command example to use active learning strategies with DeepEdit would be:
```bash
monailabel start_server --app workspace/radiology --studies workspace/images --conf models deepedit --conf skip_scoring false --conf skip_strategies false --conf epistemic_enabled true
```
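Conceptually, the epistemic strategy scores each unlabeled image by model uncertainty and serves the most uncertain ones for annotation first. A minimal pure-Python sketch of that ranking idea (hypothetical helper name and scores, not MONAI Label's actual implementation), where uncertainty is estimated as the variance across repeated stochastic forward passes:

```python
from statistics import variance

def rank_by_epistemic_uncertainty(predictions_per_image, limit=None):
    """Rank image ids by uncertainty (variance across stochastic passes).

    predictions_per_image: dict mapping image id -> list of scalar
    predictions from repeated forward passes (e.g., with dropout on).
    Higher variance = more model disagreement = annotated earlier.
    """
    scores = {img: variance(preds) for img, preds in predictions_per_image.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:limit] if limit else ranked

# Image "b" shows the most disagreement across passes, so it is served first.
passes = {
    "a": [0.90, 0.91, 0.89],   # confident: low variance
    "b": [0.20, 0.80, 0.50],   # uncertain: high variance
    "c": [0.60, 0.65, 0.55],
}
print(rank_by_epistemic_uncertainty(passes, limit=2))  # ['b', 'c']
```

Here `epistemic_samples` would correspond to the `limit` argument: it caps how many samples are scored and queued.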
- Network: This model uses DynUNet as the default network. It also comes with a pretrained UNETR model. Researchers can define their own network or use one of those listed here.
- Labels:
  `{ "spleen": 1, "right kidney": 2, "left kidney": 3, "liver": 6, "stomach": 7, "aorta": 8, "inferior vena cava": 9, "background": 0 }`
- Dataset: The model is pre-trained on the dataset: https://www.synapse.org/#!Synapse:syn3193805/wiki/217789
- Inputs:
  - 1 channel for the image modality -> automated mode
  - 1+N channels (image modality + points for N labels, including background) -> interactive mode
- Output: N channels representing the segmented organs/tumors/tissues
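To make the input arithmetic above concrete, a tiny helper (hypothetical, for illustration only) computes the channel count for each mode: automated mode takes just the image channel, while interactive mode adds one point-guidance channel per label, background included:

```python
def deepedit_input_channels(n_labels: int, interactive: bool) -> int:
    """Channels expected by the network for a given mode.

    n_labels counts every label, including background.
    Automated mode: image only. Interactive mode: image plus one
    point-guidance channel per label.
    """
    return 1 + n_labels if interactive else 1

# Example label set from above: spleen, two kidneys, liver, stomach,
# aorta, inferior vena cava, background -> 8 labels total.
print(deepedit_input_channels(8, interactive=False))  # 1
print(deepedit_input_channels(8, interactive=True))   # 9
```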
Deepgrow is an algorithm for interactive segmentation based on foreground/background clicks (https://arxiv.org/abs/1903.08205). It uses pre-trained weights from NVIDIA Clara.
It provides both 2D and 3D versions for annotating images, plus a DeepgrowPipeline (inference only) that combines the best of the 2D and 3D results. The Deepgrow 2D model trains faster and with higher accuracy than the Deepgrow 3D model.
The labels are flattened as part of the pre-processing step, and the model is trained on binary labels. As an advantage, you can feed new labels to the model dynamically (zero code change) and expect it to learn the new organ.
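The flattening idea can be sketched in a few lines of plain Python (toy 1-D volume and hypothetical label ids, not the app's actual transforms): every voxel of the chosen label becomes foreground and everything else background, so the network only ever sees binary masks:

```python
def flatten_to_binary(label_volume, target):
    """Collapse a multi-label volume to a binary mask for one label.

    label_volume: flat list of integer label ids (any organ map).
    target: the label id to train on; everything else becomes background.
    Because training always sees binary masks, a brand-new organ label
    can be fed in without any code change.
    """
    return [1 if v == target else 0 for v in label_volume]

# Toy 1-D "volume": liver=6, stomach=7, background=0
volume = [0, 6, 6, 7, 0, 6]
print(flatten_to_binary(volume, target=6))  # [0, 1, 1, 0, 0, 1]
```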
```bash
monailabel start_server --app workspace/radiology --studies workspace/images --conf models deepgrow_2d,deepgrow_3d
```
- Additional configs (pass them as `--conf <name> <value>` when starting the MONAI Label server):
| Name | Values | Description |
|---|---|---|
| preload | true, false | Preload model into GPU |
- Network: This App uses the BasicUNet as the default network.
- Labels:
[ "spleen", "right kidney", "left kidney", "gallbladder", "esophagus", "liver", "stomach", "aorta", "inferior vena cava", "portal vein and splenic vein", "pancreas", "right adrenal gland", "left adrenal gland" ]
NOTE: You can feed any new labels to the network to learn new organs/tissues, etc.
- Dataset: The model is pre-trained on the dataset: https://www.synapse.org/#!Synapse:syn3193805/wiki/217789
- Inputs: 3 channels representing image + foreground clicks + background clicks
- Output: 1 channel representing the segmented organs/tumors/tissues
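The 3-channel input can be illustrated with a toy 1-D example (hypothetical helper; real guidance maps are typically Gaussian-smoothed rather than hard 0/1 masks):

```python
def deepgrow_input(image, fg_clicks, bg_clicks):
    """Stack image + foreground-click + background-click channels.

    image: flat list of intensities for a (toy) 1-D image.
    fg_clicks / bg_clicks: indices the user clicked.
    """
    fg = [1 if i in fg_clicks else 0 for i in range(len(image))]
    bg = [1 if i in bg_clicks else 0 for i in range(len(image))]
    return [image, fg, bg]  # 3 channels, as the model expects

chans = deepgrow_input([0.1, 0.9, 0.8, 0.2], fg_clicks={1}, bg_clicks={3})
print(len(chans))  # 3
print(chans[1])    # [0, 1, 0, 0]
```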
Segmentation is a model based on UNet for automated segmentation. This model works for single and multiple label segmentation tasks.
```bash
monailabel start_server --app workspace/radiology --studies workspace/images --conf models segmentation
```
- Additional configs (pass them as `--conf <name> <value>` when starting the MONAI Label server):
| Name | Values | Description |
|---|---|---|
| use_pretrained_model | true, false | Set to `false` to skip loading pretrained weights |
| preload | true, false | Preload model into GPU |
| scribbles | true, false | Set to `false` to skip loading the scribbles models (useful for user studies) |
- Network: This model uses UNet as the default network. Researchers can define their own network or use one of those listed here.
- Labels:
  `{ "spleen": 1, "right kidney": 2, "left kidney": 3, "gallbladder": 4, "esophagus": 5, "liver": 6, "stomach": 7, "aorta": 8, "inferior vena cava": 9, "portal vein and splenic vein": 10, "pancreas": 11, "right adrenal gland": 12, "left adrenal gland": 13 }`
- Dataset: The model is pre-trained on the dataset: https://www.synapse.org/#!Synapse:syn3193805/wiki/217789
- Inputs: 1 channel for the image modality
- Output: N channels representing the segmented organs/tumors/tissues
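The N output channels are per-label score maps; they are typically collapsed into a single label map by taking the per-voxel argmax over channels. A minimal sketch with toy scores (not the app's actual post-processing transforms):

```python
def channels_to_labelmap(channel_scores):
    """Convert per-channel scores to a single label map via argmax.

    channel_scores: list of N channels, each a flat list of scores,
    ordered so that the channel index equals the label id (0 = background).
    """
    n_voxels = len(channel_scores[0])
    return [
        max(range(len(channel_scores)), key=lambda c: channel_scores[c][v])
        for v in range(n_voxels)
    ]

# 3 channels (background=0, plus two hypothetical organ labels) over 4 voxels
scores = [
    [0.8, 0.1, 0.2, 0.7],  # background
    [0.1, 0.7, 0.3, 0.2],  # label 1
    [0.1, 0.2, 0.5, 0.1],  # label 2
]
print(channels_to_labelmap(scores))  # [0, 1, 2, 0]
```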
Segmentation Spleen is a model based on UNet for automated single-label spleen segmentation. It uses pre-trained weights from NVIDIA Clara.
This is a simple reference for users who want to add their own model to the Radiology App.
```bash
monailabel start_server --app workspace/radiology --studies workspace/images --conf models segmentation_spleen
```
- Additional configs (pass them as `--conf <name> <value>` when starting the MONAI Label server):
| Name | Values | Description |
|---|---|---|
| use_pretrained_model | true, false | Set to `false` to skip loading pretrained weights |
| skip_scoring | true, false | Set to `false` to enable scoring methods |
| skip_strategies | true, false | Set to `false` to enable active learning strategies |
| epistemic_enabled | true, false | Enable Epistemic based Active Learning Strategy |
| epistemic_samples | int | Limit number of samples to run epistemic scoring |
| preload | true, false | Preload model into GPU |
A command example to use active learning strategies with segmentation_spleen would be:
```bash
monailabel start_server --app workspace/radiology --studies workspace/images --conf models segmentation_spleen --conf skip_scoring false --conf skip_strategies false --conf epistemic_enabled true
```
- Network: This App uses the UNet as the default network.
- Labels:
{ "Spleen": 1 } - Dataset: The model is pre-trained over dataset: http://medicaldecathlon.com/
- Inputs: 1 channel for the image modality
- Output: 1 channels representing the segmented spleen
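A single binary output channel makes downstream measurements straightforward. For example, a sketch of computing the segmented spleen volume from the mask and an assumed voxel spacing (hypothetical helper, illustration only):

```python
def spleen_volume_ml(binary_mask, spacing_mm=(1.0, 1.0, 1.0)):
    """Volume of the segmented spleen in millilitres.

    binary_mask: flat list of 0/1 voxels from the single output channel.
    spacing_mm: voxel spacing along each axis (hypothetical default).
    1 ml = 1000 mm^3.
    """
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return sum(binary_mask) * voxel_mm3 / 1000.0

mask = [0, 1, 1, 1, 0, 1]                          # 4 spleen voxels
print(spleen_volume_ml(mask, (10.0, 10.0, 10.0)))  # 4.0 (each voxel is 1 ml)
```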
Multistage Vertebra Segmentation is an example of a multistage approach for segmenting several structures on a CT image. The model has three stages that can be used together or independently.
Stage 1: Spine Localization
As the name suggests, this stage localizes the spine as a single label. See the following image:

Stage 2: Vertebra Localization
This stage uses the output of the first stage, crops the volume around the spine, and roughly segments the vertebrae.
Stage 3: Vertebra Segmentation
Finally, this stage takes the output of the second stage, computes the centroids, and then segments one vertebra at a time. See the following image:

The difference between the second and third stages is that the third stage produces a finer segmentation of each vertebra.
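The hand-off from stage 2 to stage 3 hinges on centroids: the rough per-vertebra label map is reduced to one center point per vertebra, which stage 3 can then use to crop and segment one vertebra at a time. A toy sketch of that centroid step (hypothetical data layout, not the app's actual implementation):

```python
def label_centroids(labelmap):
    """Centroid (mean coordinate) of every non-background label.

    labelmap: dict mapping (x, y, z) voxel coordinates -> label id.
    Background (0) voxels are ignored.
    """
    sums, counts = {}, {}
    for coord, label in labelmap.items():
        if label == 0:
            continue
        s = sums.setdefault(label, [0, 0, 0])
        for axis in range(3):
            s[axis] += coord[axis]
        counts[label] = counts.get(label, 0) + 1
    return {
        label: tuple(v / counts[label] for v in s)
        for label, s in sums.items()
    }

# Two toy vertebrae (labels 1 and 2, standing in for a rough stage-2 output)
rough = {(0, 0, 0): 1, (2, 0, 0): 1, (4, 4, 4): 2}
print(label_centroids(rough))  # {1: (1.0, 0.0, 0.0), 2: (4.0, 4.0, 4.0)}
```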
```bash
monailabel start_server --app workspace/radiology --studies workspace/images --conf models localization_spine,localization_vertebra,segmentation_vertebra
```
- Additional configs (pass them as `--conf <name> <value>` when starting the MONAI Label server):
| Name | Values | Description |
|---|---|---|
| use_pretrained_model | true, false | Set to `false` to skip loading pretrained weights |
- Network: This App uses the UNet as the default network.
- Labels:
{ "C1": 1, "C2": 2, "C3": 3, "C4": 4, "C5": 5, "C6": 6, "C7": 7, "Th1": 8, "Th2": 9, "Th3": 10, "Th4": 11, "Th5": 12, "Th6": 13, "Th7": 14, "Th8": 15, "Th9": 16, "Th10": 17, "Th11": 18, "Th12": 19, "L1": 20, "L2": 21, "L3": 22, "L4": 23, "L5": 24 } - Dataset: The model is pre-trained over VerSe dataset: https://github.com/anjany/verse
- Inputs: 1 channel for the CT image
- Output: N channels representing the segmented vertebrae
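Since the label names above encode the spinal region (C = cervical, Th = thoracic, L = lumbar), they can be grouped by prefix when post-processing results per region; a small sketch (hypothetical helper, illustration only):

```python
def group_by_region(vertebra_labels):
    """Group vertebra label names by spinal region from their prefix.

    C = cervical, Th = thoracic, L = lumbar, matching the naming in
    the label table above.
    """
    regions = {"C": [], "Th": [], "L": []}
    for name in vertebra_labels:
        prefix = "Th" if name.startswith("Th") else name[0]
        regions[prefix].append(name)
    return regions

labels = ["C1", "C7", "Th1", "Th12", "L1", "L5"]
print(group_by_region(labels))
# {'C': ['C1', 'C7'], 'Th': ['Th1', 'Th12'], 'L': ['L1', 'L5']}
```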