Abstract #2231

Instance-level explanations in multiple sclerosis lesion segmentation: a novel localized saliency map

Federico Spagnolo1,2,3,4, Nataliia Molchanova4,5, Roger Schaer4, Mario Ocampo-Pineda1,2,3, Meritxell Bach Cuadra5,6, Lester Melie-Garcia1,2,3, Cristina Granziera1,2,3, Vincent Andrearczyk4, and Adrien Depeursinge4,7
1Translational Imaging in Neurology (ThINK) Basel, Department of Medicine and Biomedical Engineering, University Hospital Basel and University of Basel, Basel, Switzerland, 2Department of Neurology, University Hospital Basel, Basel, Switzerland, 3Research Center for Clinical Neuroimmunology and Neuroscience Basel (RC2NB), University Hospital Basel and University of Basel, Basel, Switzerland, 4MedGIFT, Institute of Informatics, School of Management, HES-SO Valais-Wallis University of Applied Sciences and Arts Western Switzerland, Sierre, Switzerland, 5CIBM Center for Biomedical Imaging, Lausanne, Switzerland, 6Radiology Department, Lausanne University Hospital (CHUV) and University of Lausanne, Lausanne, Switzerland, 7Nuclear Medicine and Molecular Imaging Department, Lausanne University Hospital (CHUV) and University of Lausanne, Lausanne, Switzerland

Synopsis

Keywords: Other AI/ML, Machine Learning/Artificial Intelligence, Explainability, Interpretability

Motivation: The use of AI in clinical routine is often jeopardized by its lack of transparency. Explainable methods would help both clinicians and developers identify model biases and interpret automatic outputs.

Goal(s): We propose an explainable method providing insights into the decision process of an MS lesion segmentation network.

Approach: We adapt SmoothGrad to produce instance-level explanations and apply it to a U-Net whose inputs are FLAIR and MPRAGE images from 10 MS patients.
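The core idea of SmoothGrad is to average input gradients over noise-perturbed copies of the input; restricting the backpropagated output score to the voxels of a single predicted lesion makes the explanation instance-level. The following is a minimal NumPy sketch of that idea only, not the authors' implementation: the segmentation network is replaced by a hypothetical per-voxel sigmoid model with an analytic gradient, and `mask` stands in for the binary mask of one lesion instance.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def masked_grad(x, W, mask):
    """Gradient w.r.t. the input x of the output score summed over one instance.

    Toy stand-in for backprop through a segmentation network: per-voxel
    scores are sigmoid(W @ x), and only the voxels inside the binary
    instance `mask` contribute to the score that is differentiated,
    which localizes the saliency to that lesion.
    """
    s = sigmoid(W @ x)                     # per-voxel predicted scores
    return W.T @ (mask * s * (1.0 - s))    # chain rule through the sigmoid

def smoothgrad(x, W, mask, n_samples=50, sigma=0.1, seed=0):
    """SmoothGrad: average the instance-masked gradient over noisy inputs."""
    rng = np.random.default_rng(seed)
    grads = np.zeros_like(x)
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        grads += masked_grad(noisy, W, mask)
    return grads / n_samples
```

In the actual method, `masked_grad` would be one backward pass through the trained U-Net with the loss replaced by the sum of logits inside a connected lesion component, and `x` would be the multi-channel FLAIR/MPRAGE volume.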

Results: Our saliency maps provide local-level information on the network's decisions. The U-Net's predictions rely predominantly on lesion voxel intensities in FLAIR and on the amount of perilesional volume.

Impact: These results shed light on the decision mechanisms of deep learning networks performing semantic segmentation. This new knowledge can be an important step toward facilitating the integration of AI into clinical practice.

