Abstract #0781

Stop copying contours from Cine to LGE: multimodal learning with disentangled representations needs zero annotations

Agisilaos Chartsias1, Haochuan Jiang1, Giorgos Papanastasiou2,3, Chengjia Wang2,3, Colin Stirrat2,3, Scott Semple2,3, David Newby2,3, Rohan Dharmakumar4, and Sotirios A Tsaftaris1,5
1School of Engineering, University of Edinburgh, Institute of Digital Communications, Edinburgh, United Kingdom, 2Edinburgh Imaging Facility QMRI, Edinburgh, United Kingdom, 3Centre for Cardiovascular Science, Edinburgh, United Kingdom, 4Cedars Sinai Medical Center, Los Angeles, CA, United States, 5The Alan Turing Institute, London, United Kingdom

We propose a novel deep learning method, the Multi-modal Spatial Disentanglement Network (MMSDNet), to segment anatomy in medical images. MMSDNet exploits the complementary information provided by multiple sequences acquired from the same patient. Even when trained without annotations, it can segment anatomy (e.g., the myocardium) in Late Gadolinium Enhancement (LGE) images, which is essential for assessing myocardial infarction. It does so by transferring knowledge from simultaneously acquired cine-MR data, where annotations are easier to obtain. MMSDNet outperforms classical methods, including non-linear registration and simple copying of contours, as well as the state-of-the-art U-Net model.
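The core idea, separating a modality-independent anatomy factor from modality-specific appearance so that a segmentor trained on one sequence transfers to another, can be caricatured in a toy numpy sketch. Everything below (the threshold "encoder", identity weights, synthetic cine/LGE intensities) is an illustrative assumption, not the MMSDNet architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def anatomy_encoder(image, W):
    # Hypothetical encoder: projects the image and binarises it into a
    # spatial anatomy factor. The hard threshold stands in for the
    # modality-stripping role the disentangled representation plays.
    return (image @ W > 0.5).astype(float)

def segmentor(anatomy, V):
    # Shared segmentor: it only ever sees the modality-free anatomy
    # factor, so the same weights apply to cine- and LGE-derived factors.
    return anatomy @ V

# Toy "cine" and "LGE" slices: identical anatomy, different intensity scales.
anatomy_gt = rng.integers(0, 2, size=(4, 8)).astype(float)
cine = anatomy_gt * 1.0 + 0.01 * rng.standard_normal((4, 8))
lge = anatomy_gt * 3.0 + 0.01 * rng.standard_normal((4, 8))

W = np.eye(8)  # identity projections, purely for illustration
V = np.eye(8)

seg_cine = segmentor(anatomy_encoder(cine, W), V)
seg_lge = segmentor(anatomy_encoder(lge, W), V)

# Because appearance differences are removed before segmentation,
# the two modalities yield the same segmentation.
print(np.allclose(seg_cine, seg_lge))  # True
```

In the real method the encoder and segmentor are learned networks and the cine annotations supervise the shared segmentor; the sketch only shows why a modality-invariant factor lets that supervision carry over to unannotated LGE images.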

This abstract and the presentation materials are available to 2020 meeting attendees and eLibrary customers only; a login is required.
