Abstract #3914

Learning to segment with limited annotations: Self-supervised pretraining with Regression and Contrastive loss in MRI

Lavanya Umapathy1,2, Zhiyang Fu1,2, Rohit Philip2, Diego Martin3, Maria Altbach2,4, and Ali Bilgin1,2,4,5
1Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, United States, 2Department of Medical Imaging, University of Arizona, Tucson, AZ, United States, 3Department of Radiology, Houston Methodist Hospital, Houston, TX, United States, 4Department of Biomedical Engineering, University of Arizona, Tucson, AZ, United States, 5Program in Applied Mathematics, University of Arizona, Tucson, AZ, United States

Synopsis

The availability of large unlabeled datasets, compared to labeled ones, motivates the use of self-supervised pretraining to initialize deep learning models for subsequent segmentation tasks. We consider two pretraining approaches that drive a CNN to learn different representations: a) a reconstruction loss that exploits spatial dependencies and b) a contrastive loss that exploits semantic similarity. The techniques are evaluated in two MR segmentation applications: a) liver and b) prostate segmentation in T2-weighted images. We observed that CNNs pretrained using self-supervision can be finetuned to comparable performance with fewer labeled datasets.
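To make the two pretraining signals concrete, the following is a minimal sketch in PyTorch. It is not the authors' implementation: the Encoder architecture, the temperature value, and the use of an NT-Xent-style contrastive formulation are illustrative assumptions; the reconstruction term is shown as a simple pixelwise regression loss on unlabeled images.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy CNN encoder standing in for the segmentation backbone (assumed architecture)."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(32, out_dim)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.proj(h)

def reconstruction_loss(prediction, target):
    # Regression-style pretraining: predict the image (or a corrupted
    # version of it) from unlabeled data and penalize pixelwise error.
    return F.mse_loss(prediction, target)

def contrastive_loss(z1, z2, temperature=0.1):
    # NT-Xent-style loss (assumed form): two augmented views of the same
    # slice are positives; all other samples in the batch are negatives.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)            # (2N, d)
    sim = z @ z.t() / temperature             # pairwise similarities, (2N, 2N)
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))     # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n),
                         torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Usage: embed two augmented views of a batch of unlabeled MR slices.
x1, x2 = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)
enc = Encoder()
loss = contrastive_loss(enc(x1), enc(x2))
loss.backward()

After pretraining with either loss, the encoder weights would initialize the segmentation network, which is then finetuned on the available labeled datasets.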
