Keywords: Other AI/ML, Machine Learning/Artificial Intelligence, Explainability, Interpretability
Motivation: The adoption of AI in clinical routine is often hindered by its lack of transparency. Explainable methods would help both clinicians and developers identify model bias and interpret the automatic outputs.
Goal(s): We propose an explainability method that provides insights into the decision process of an MS lesion segmentation network.
Approach: We adapt SmoothGrad to perform instance-level explanations and apply it to a U-Net whose inputs are FLAIR and MPRAGE images from 10 MS patients.
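The abstract does not include the implementation, but the idea can be sketched as follows: Gaussian noise is added to the multi-contrast input several times, and the gradients of one lesion instance's summed predicted probability with respect to the input are averaged into a saliency map. The sketch below is a minimal PyTorch illustration under our own assumptions; the function name, tensor shapes, noise level, and toy stand-in model are hypothetical and not the authors' code.

```python
import torch

def instance_smoothgrad(model, image, lesion_mask, n_samples=25, noise_std=0.1):
    """Instance-level SmoothGrad sketch: average input gradients of one
    lesion's summed probability over several noisy copies of the input.

    image:       (B, 2, D, H, W) tensor, e.g. FLAIR + MPRAGE channels (assumed)
    lesion_mask: boolean tensor matching the model output, selecting one lesion
    """
    model.eval()
    grad_sum = torch.zeros_like(image)
    for _ in range(n_samples):
        # Perturb the input with Gaussian noise and track gradients on it.
        noisy = (image + noise_std * torch.randn_like(image)).requires_grad_(True)
        prob = torch.sigmoid(model(noisy))   # per-voxel lesion probability
        score = prob[lesion_mask].sum()      # scalar score for this one instance
        grad, = torch.autograd.grad(score, noisy)
        grad_sum += grad
    return grad_sum / n_samples              # saliency map, same shape as input

if __name__ == "__main__":
    # Toy stand-in for the U-Net: a single 3D convolution with one logit channel.
    model = torch.nn.Conv3d(2, 1, kernel_size=3, padding=1)
    image = torch.randn(1, 2, 16, 16, 16)
    mask = torch.zeros(1, 1, 16, 16, 16, dtype=torch.bool)
    mask[0, 0, 8, 8, 8] = True               # one hypothetical "lesion" voxel
    saliency = instance_smoothgrad(model, image, mask)
    print(saliency.shape)                     # torch.Size([1, 2, 16, 16, 16])
```

Restricting the gradient target to the voxels of a single lesion instance (rather than the whole prediction) is what makes the explanation instance-level; the noise averaging is standard SmoothGrad and smooths out gradient artifacts.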
Results: Our saliency maps provide local-level information on the network's decisions. The U-Net's predictions rely predominantly on lesion voxel intensities in FLAIR and on the amount of perilesional volume.
Impact: These results shed light on the decision mechanisms of deep learning networks performing semantic segmentation. This new knowledge can be an important step toward facilitating AI integration into clinical practice.