Large-kernel Attention for Efficient and Robust Brain Lesion Segmentation
Vision transformers are effective deep learning models for vision tasks, including medical image segmentation.
Tags: Paper and LLMs

Pricing Type
- Pricing Type: Free
GitHub Link
The GitHub link is https://github.com/liamchalcroft/mdunet
Introduction
The GitHub repository “MDUNet” by liamchalcroft presents a U-Net model that uses the matrix decomposition framework proposed by Guo et al. in their paper “Visual Attention Network.” The code was originally derived from NVIDIA’s nnUNet implementation and was created for submission to the MICCAI 2022 BrainLes ISLES and ATLAS challenges. Because the repository’s structure and format closely follow NVIDIA’s implementation, users are advised to consult NVIDIA’s documentation for guidance.
Content
This repo contains the code used for submission to the MICCAI 2022 BrainLes ISLES and ATLAS challenges. The repo was originally a fork of the NVIDIA nnUNet implementation (https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/Segmentation/nnUNet), so its structure and format are heavily based on that codebase; their documentation is recommended as the first point of contact.
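
To make the attention mechanism referenced above concrete, here is a minimal PyTorch sketch of a large-kernel attention (LKA) block in the style of Guo et al.'s Visual Attention Network, adapted to 3D volumes as would be typical for brain lesion segmentation. The module name, channel count, and the 3D adaptation are illustrative assumptions; this is not the repository's exact implementation.

```python
import torch
import torch.nn as nn


class LargeKernelAttention3d(nn.Module):
    """Sketch of a large-kernel attention (LKA) block, following the
    decomposition in Guo et al., 'Visual Attention Network', adapted
    to 3D volumes. A large kernel is decomposed into a depth-wise conv,
    a depth-wise dilated conv, and a 1x1x1 conv; the result gates the input."""

    def __init__(self, channels: int):
        super().__init__()
        # 5x5x5 depth-wise convolution captures local context.
        self.dw_conv = nn.Conv3d(channels, channels, kernel_size=5,
                                 padding=2, groups=channels)
        # 7x7x7 depth-wise dilated convolution (dilation 3); together with
        # the 5x5x5 conv it approximates a 21x21x21 receptive field.
        self.dw_dilated = nn.Conv3d(channels, channels, kernel_size=7,
                                    padding=9, dilation=3, groups=channels)
        # 1x1x1 convolution mixes information across channels.
        self.pw_conv = nn.Conv3d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.pw_conv(self.dw_dilated(self.dw_conv(x)))
        return x * attn  # element-wise gating, as in the VAN paper


if __name__ == "__main__":
    # Toy check on a small 3D feature map (batch, channels, D, H, W).
    feats = torch.randn(1, 16, 32, 32, 32)
    out = LargeKernelAttention3d(16)(feats)
    print(out.shape)  # torch.Size([1, 16, 32, 32, 32])
```

The appeal of this decomposition is that the large effective receptive field comes almost entirely from cheap depth-wise operations, which keeps parameter and compute costs low compared to either a dense large kernel or full self-attention, particularly in 3D.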











