Keywords: Machine Learning/Artificial Intelligence, Representational Learning

The limited availability of labeled data motivates the use of self-supervised pretraining techniques for deep learning (DL) models. Here, we propose a novel contrastive loss that pushes/pulls local representations within an image based on representational constraints from co-registered multi-contrast MR images that share similar underlying parameters. For multi-organ segmentation tasks in T2-weighted images, pretraining a DL model using the proposed loss function, with constraints from co-registered echo images of a radial TSE acquisition, can help reduce the annotation burden by 60%. On two independent datasets, the proposed pretraining improved Dice scores compared to random initialization and pretraining with a conventional contrastive loss.
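The abstract does not include implementation details, but a minimal sketch of one way such a location-wise contrastive loss could be realized is shown below (PyTorch). The InfoNCE-style formulation, all function and parameter names, and the location-sampling strategy are assumptions for illustration, not the authors' code:

```python
import torch
import torch.nn.functional as F

def local_contrastive_loss(feat_a, feat_b, temperature=0.1, num_samples=256):
    """InfoNCE-style loss over local (per-location) features from two
    co-registered contrasts. Features at the same spatial location are
    pulled together (positives); features at other locations are pushed
    apart (negatives).

    feat_a, feat_b: (B, C, H, W) feature maps from the two contrasts.
    """
    b, c, h, w = feat_a.shape
    # Flatten spatial dimensions: (B, H*W, C).
    fa = feat_a.flatten(2).transpose(1, 2)
    fb = feat_b.flatten(2).transpose(1, 2)

    # Subsample locations to keep the similarity matrix small.
    idx = torch.randperm(h * w, device=feat_a.device)[:num_samples]
    fa = F.normalize(fa[:, idx], dim=-1)  # (B, N, C)
    fb = F.normalize(fb[:, idx], dim=-1)

    # Pairwise cosine similarities between all sampled locations.
    logits = torch.bmm(fa, fb.transpose(1, 2)) / temperature  # (B, N, N)

    # Positive pairs lie on the diagonal: same location, other contrast.
    targets = torch.arange(fa.shape[1], device=feat_a.device).expand(b, -1)
    return F.cross_entropy(logits.flatten(0, 1), targets.flatten())
```

Because the echo images are co-registered, features at the same spatial location across contrasts serve as natural positive pairs, so no synthetic augmentations are needed to define the contrastive objective.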