Large-kernel Attention for Efficient and Robust Brain Lesion Segmentation

Vision transformers are effective deep learning models for vision tasks, including medical image segmentation.

Pricing Type

  • Pricing Type: Free

GitHub Link

The GitHub link is https://github.com/liamchalcroft/mdunet

Introduction

The GitHub repository "MDUNet" by liamchalcroft presents a U-Net model that uses the matrix decomposition framework proposed by Guo et al. in their paper "Visual Attention Network." The code was originally derived from NVIDIA's nnUNet implementation and was created for submission to the MICCAI 2022 BrainLes ISLES and ATLAS challenges. The repository's structure and format closely follow NVIDIA's implementation, so users are advised to consult that documentation for guidance.

 

Content

This repo contains the code used for submission to the MICCAI 2022 BrainLes ISLES and ATLAS challenges. The repo was originally a fork of the NVIDIA nnUNet implementation (https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/Segmentation/nnUNet), so its structure and format are heavily based on that codebase, and their documentation is recommended as a first point of contact.
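The large-kernel attention (LKA) that the title refers to comes from Guo et al.'s Visual Attention Network: a large convolution is decomposed into a small depthwise convolution (local context), a depthwise dilated convolution (long-range context), and a 1x1 pointwise convolution (channel mixing), and the result gates the input elementwise. The following is a minimal NumPy sketch of that decomposition, not code from the repo; the kernel sizes (5 and 7) and dilation (3) follow the VAN paper's defaults, and all function names are illustrative.

```python
import numpy as np

def depthwise_conv2d(x, kernels, dilation=1):
    """Per-channel 2D convolution, 'same' padding, stride 1.

    x: (C, H, W) feature map; kernels: (C, kh, kw), one filter per channel.
    """
    C, H, W = x.shape
    kh, kw = kernels.shape[1:]
    ph = dilation * (kh - 1) // 2
    pw = dilation * (kw - 1) // 2
    xp = np.pad(x, ((0, 0), (ph, ph), (pw, pw)))
    out = np.zeros((C, H, W), dtype=float)
    for i in range(kh):
        for j in range(kw):
            # Accumulate each (dilated) kernel tap over the padded input.
            out += kernels[:, i:i + 1, j:j + 1] * \
                   xp[:, i * dilation:i * dilation + H,
                         j * dilation:j * dilation + W]
    return out

def large_kernel_attention(x, dw_k, dwd_k, pw_w, dilation=3):
    """LKA decomposition: depthwise conv -> dilated depthwise conv ->
    1x1 pointwise conv, then use the result to gate the input."""
    attn = depthwise_conv2d(x, dw_k)                 # local context (5x5)
    attn = depthwise_conv2d(attn, dwd_k, dilation)   # long range (7x7, d=3)
    attn = np.einsum('oc,chw->ohw', pw_w, attn)      # channel mixing (1x1 conv)
    return attn * x                                  # attention as a gate
```

With identity (delta) kernels and an identity pointwise matrix, the attention map reduces to the input itself, so the output is simply `x * x`; this is a convenient sanity check that the decomposition is wired correctly.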

