
congeneric_renderer

[Usage]

    1. Start with the .ini configuration file in the /outcome folder.
    2. In 'train' mode, the renderer, segmentor and all discriminators are trained in a self-play manner.
    3. Training images should be 320x320 pixels.
    4. In 'test' mode, you can test an image by 'training' all the models in an online, weakly supervised way.
    5. The 'test' mode first renders the whole training dataset to obtain a more uniform intensity distribution, as described in the MICCAI paper.
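As a minimal sketch of step 1, the run mode could be read from the .ini file with Python's standard `configparser`. The section and option names (`[run]`, `mode`) and the fallback value are assumptions for illustration only; check the actual file shipped in /outcome for the real keys.

```python
import configparser

def load_mode(ini_path):
    """Return the run mode ('train' or 'test') declared in an .ini file.

    NOTE: the section/option names here are hypothetical; adapt them to
    the keys actually used by the .ini file in /outcome.
    """
    parser = configparser.ConfigParser()
    parser.read(ini_path)
    mode = parser.get("run", "mode", fallback="train")
    if mode not in ("train", "test"):
        raise ValueError(f"unsupported mode: {mode}")
    return mode

if __name__ == "__main__":
    import tempfile, os
    # Write a tiny example config to demonstrate usage.
    with tempfile.NamedTemporaryFile("w", suffix=".ini", delete=False) as f:
        f.write("[run]\nmode = test\n")
        path = f.name
    print(load_mode(path))  # prints: test
    os.remove(path)
```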

Ten test prediction results across the 25 iterations are provided in the results folder for reference. Segmentation refinement can be observed as the iterations increase. Currently, this method only works on the two fetal head datasets and does not show advantages on other, more complex tasks.

[Basic Problem Illustration] Segmentation performance drops due to the appearance shift among different ultrasound machines.

Improvement curves of DICE along 25 iterations.
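The DICE score tracked in these curves is the standard overlap metric between a predicted mask and the ground truth. A minimal NumPy sketch (independent of this repository's own evaluation code, which may differ in smoothing details):

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice coefficient between two binary masks (nonzero = foreground).

    DICE = 2 * |pred AND gt| / (|pred| + |gt|); eps avoids division by
    zero when both masks are empty.
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

# Usage: a half-overlapping pair of toy 4x4 masks.
a = np.zeros((4, 4), dtype=np.uint8); a[:2, :] = 1   # top half
b = np.zeros((4, 4), dtype=np.uint8); b[:2, :2] = 1  # top-left quarter
print(round(dice(a, b), 3))  # 2*4 / (8+4) = 0.667
```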

Improved segmentation results along 25 iterations.

If the code is helpful for your research, please cite our paper:

@inproceedings{yang2018generalizing,
  title={Generalizing deep models for ultrasound image segmentation},
  author={Yang, Xin and Dou, Haoran and Li, Ran and Wang, Xu and Bian, Cheng and Li, Shengli and Ni, Dong and Heng, Pheng-Ann},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
  pages={497--505},
  year={2018},
  organization={Springer}
}

About

Code of MICCAI 2018 Paper about Cross-Device Segmentation
