- It was a CV + medical imaging competition where we were given RLE-encoded segmentation masks for different neuronal cell lines [shsy5y, cort and astro]. For a detailed explanation of the data and cell types you can check out my NB here.
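Since the masks arrive as run-length encodings, a minimal decoder is useful early on. This is a sketch assuming Kaggle's usual RLE convention (space-separated start/length pairs, pixels 1-indexed and ordered top-to-bottom then left-to-right, i.e. column-major):

```python
import numpy as np

def rle_decode(rle, shape):
    """Decode a Kaggle-style RLE string into a binary mask.

    Assumes (start, length) pairs with 1-indexed pixel positions
    ordered top-to-bottom, then left-to-right (column-major).
    """
    flat = np.zeros(shape[0] * shape[1], dtype=np.uint8)
    nums = [int(x) for x in rle.split()]
    starts, lengths = nums[0::2], nums[1::2]
    for start, length in zip(starts, lengths):
        flat[start - 1 : start - 1 + length] = 1
    # order="F" reshapes column-major, matching the pixel ordering
    return flat.reshape(shape, order="F")
```

Encoding for submission is the same transform in reverse (flatten in Fortran order, then emit start/length runs).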
- It was my first competition on Kaggle. I had previously tried a melanoma classification competition in my 2nd year, but it was very overwhelming for me, so I quit early. This time, though, I stayed until the last hour of the competition.
- I was not able to get a medal in this competition, but luckily we made it into the top 15% of the LB. It also helped me understand how to approach a Kaggle problem. Another good thing that happened was that one of my teammates became a Notebook and Discussion Expert.
- I started 4 days late. At first I tried to understand the data, the evaluation metric, etc. I began by creating a UNet-based model with attention and residual connections, but it did not really give good results, although the inference masks looked very neat. I then moved on to pretrained UNet models with different backbones [resnet, efficientnet-b2]. Still it was not good enough [0.15+].
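For context, instance-segmentation competitions like this one are typically scored with mean average precision over IoU thresholds from 0.50 to 0.95. A simplified sketch of that kind of scorer (the greedy matching below is an approximation, not the exact Kaggle implementation):

```python
import numpy as np

def iou_matrix(gt_masks, pred_masks):
    """Pairwise IoU between ground-truth and predicted binary masks."""
    ious = np.zeros((len(gt_masks), len(pred_masks)))
    for i, g in enumerate(gt_masks):
        for j, p in enumerate(pred_masks):
            inter = np.logical_and(g, p).sum()
            union = np.logical_or(g, p).sum()
            ious[i, j] = inter / union if union else 0.0
    return ious

def precision_at(ious, thresh):
    """TP / (TP + FP + FN), matching each GT to its best prediction.

    Greedy simplification: a GT whose best prediction is already
    taken counts as unmatched.
    """
    matched, tp = set(), 0
    for i in range(ious.shape[0]):
        j = int(np.argmax(ious[i]))
        if ious[i, j] > thresh and j not in matched:
            matched.add(j)
            tp += 1
    fn = ious.shape[0] - tp
    fp = ious.shape[1] - tp
    return tp / (tp + fp + fn)

def map_score(gt_masks, pred_masks):
    """Mean precision over IoU thresholds 0.50, 0.55, ..., 0.95."""
    ious = iou_matrix(gt_masks, pred_masks)
    return float(np.mean([precision_at(ious, t)
                          for t in np.arange(0.5, 1.0, 0.05)]))
```

Having a local scorer like this is what makes CV-vs-LB comparisons (discussed further down) possible at all.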
- While I was running these experiments, people on Kaggle converged on a best-performing single model, Mask RCNN, so I started experimenting with it. We did not have any clue about cross-validation or the different pre- and post-processing techniques we could use, but one of my teammates, r-matsuzaka, was researching different methods. On his advice I trained MS RCNN [which is supposed to outperform MRCNN]. It gave good results, but not as good as the high-performing MRCNN models [performances shared in the forums].
- Another breakthrough came when Slawek Biel shared his NB on cellpose, which outperformed MRCNN. We started to shift to cellpose. After 4-5 days of grinding with cellpose, we came to the conclusion that the provided GitHub repo for cellpose did not work.
- As the deadline was already near, I started building an ensemble. I had no clue how to do that, but people had already shared some NBs on ensembling based on NMS and NMW. I hacked some of that code to make it work for my models. In the end I ensembled some of my own models with some high-scoring public models. We managed to make only 52 submissions, because some of the frameworks were new to us, we got distracted by other competitions, we had little experience, and we did very little experiment tracking.
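The NMS side of that ensembling can be sketched as follows: pool every model's (mask, score) detections, then greedily keep the highest-scoring masks and drop near-duplicates. This is a minimal illustration, not the public NBs' exact code:

```python
import numpy as np

def mask_nms(masks, scores, iou_thresh=0.5):
    """Greedy NMS over instance masks from one or several models.

    masks: list of binary numpy arrays (all the same shape).
    scores: confidence per mask. Returns indices of kept masks.
    """
    order = np.argsort(scores)[::-1]  # highest score first
    keep = []
    for idx in order:
        m = masks[idx]
        suppressed = False
        for k in keep:
            inter = np.logical_and(m, masks[k]).sum()
            union = np.logical_or(m, masks[k]).sum()
            if union and inter / union > iou_thresh:
                suppressed = True  # near-duplicate of a kept mask
                break
        if not suppressed:
            keep.append(int(idx))
    return keep
```

NMW works similarly but merges the overlapping masks with score weights instead of discarding them outright.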
- Instance Segmentation Models
- Top GitHub Source Codes on Cell Instance Segmentation
- Previous Competitions on Instance Segmentation
- Relevant past competitions, their top solutions + extras
- attention-based UNet
- MRCNN using Detectron2 or PyTorch
- semantic segmentation using ResNet + XGBoost
- Experiment with the weights of MS RCNN
- Experimenting with previously downloaded model files
- Perform Ensemble with different models
- Use Detectron2 for MS RCNN [might not be a good idea]
- Add TTA with MS RCNN
- Use modified data for training.
- train PointRend with Detectron2.
- Use MRCNN with ResNet101 Detectron2.
- Do inference on MRCNN Detectron2.
- Train cellpose on cyto2 with a high epoch count.
- Do inference on cellpose.
- Increase the data using augmentation.
- try cellpose
- Use this cellpose (a modified cellpose) for the cellpose training.
- change the threshold
- Right now (only the 6th epoch) it's showing a -ve correlation with the LB. I'm checking the 5th; if that turns out the same, we should assume the CV is negatively correlated with the LB, and verify by submitting one.
- I will get MS RCNN (with a high epoch count), PointRend, and cellpose training, then start working on what other combinations we can use for ensembling and TTA.
- Go back to MS RCNN / Cascade MS RCNN with folds.
- Hyperparameter-tune the model config.
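Flip-based TTA for instance segmentation can be sketched like this; `predict` here is a hypothetical stand-in for any trained model's inference call, not an actual API from Detectron2 or MMDetection:

```python
import numpy as np

def tta_flip_predict(image, predict):
    """Horizontal-flip TTA for instance segmentation.

    predict(img) is assumed to return (list_of_binary_masks, scores).
    Masks predicted on the flipped image are flipped back before
    pooling; deduplicate the pooled set afterwards (e.g. mask NMS).
    """
    masks, scores = predict(image)
    f_masks, f_scores = predict(image[:, ::-1])       # flipped input
    masks = list(masks) + [m[:, ::-1] for m in f_masks]  # un-flip
    scores = list(scores) + list(f_scores)
    return masks, scores
```

Vertical flips and 90-degree rotations extend the same pattern; the key detail is inverting the transform on each mask before pooling.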
| Model/changes | NB version | LB | standing |
|---|---|---|---|
| AttentionResUNet | Notebook Attention based Residual Unet + EDA on cell imgs🧬 (Version 25) | 0.036 | 622nd |
| AttentionResUNet with TTA | Notebook checking out the Overlapping problem (Version 5) | 0.053 | 708th |
| MMDetection MS RCNN | MMDetection Neuron Inference (V1) [5th final epoch] | 0.233 | NA |
| Detectron2 MRCNN | Inference and Submission fb3065 (v3) [5th epoch] | 0.296 | NA |
| Detectron2 MRCNN | Inference and Submission (v1) [model_best_7] | 0.305 | 200th |
| epoch | threshold | LB | CV |
|---|---|---|---|
| 6th_epoch | False | 0.231 | 0.2777098 |
| 6th_epoch | True | 0.285 | 0.265784 |
| 5th_epoch | False | 0.233 | 0.277636 |
| 5th_epoch | True | 0.285 | 0.26764786 |
The table shows that CV and LB are negatively correlated: with threshold=True, the LB improves for both epochs while the CV drops.
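Running a Pearson correlation over the four (LB, CV) pairs from the table makes the relationship concrete (plain-Python sketch):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# LB and CV values from the epoch/threshold table above
lb = [0.231, 0.285, 0.233, 0.285]
cv = [0.2777098, 0.265784, 0.277636, 0.26764786]
r = pearson(lb, cv)  # comes out strongly negative
```

Of course, four points is far too few to draw a firm conclusion; the honest check is the one noted earlier, i.e. submitting and comparing.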
- Residual Unet with Attention + EDA + TTA + W&B🧬 (80+ upvotes)
