While recent deep-learning-based approaches to automatic cardiac magnetic resonance image segmentation have shown great promise in alleviating the need for manual segmentation, most are not applicable to realistic clinical scenarios. This is largely because they are trained on mostly homogeneous datasets, with little variation in acquisition parameters or pathology. In this work, we develop a model applicable in multi-center, multi-disease, and multi-view settings, combining heart region detection, augmentation through synthesis, and multi-fusion segmentation to address various aspects of segmenting heterogeneous cardiac data. Our experiments demonstrate competitive results on both short-axis and long-axis MR images, without physically acquiring more training data.