Abstract #0171

Learning to segment brain tumours using an explainable classifier

Soumick Chatterjee1,2,3, Hadya Yassin4, Florian Dubost5, Andreas Nürnberger2,3,6, and Oliver Speck1,6,7,8
1Department of Biomedical Magnetic Resonance, Otto von Guericke University Magdeburg, Magdeburg, Germany, 2Data and Knowledge Engineering Group, Otto von Guericke University Magdeburg, Magdeburg, Germany, 3Faculty of Computer Science, Otto von Guericke University Magdeburg, Magdeburg, Germany, 4Institute for Medical Engineering, Otto von Guericke University Magdeburg, Magdeburg, Germany, 5Department of Biomedical Data Science, Stanford University, Stanford, CA, United States, 6Center for Behavioral Brain Sciences, Magdeburg, Germany, 7German Centre for Neurodegenerative Diseases, Magdeburg, Germany, 8Leibniz Institute for Neurobiology, Magdeburg, Germany

Synopsis

Deep learning pipelines typically require manually annotated training data, and the complex reasoning performed by such methods makes them appear as “black boxes” to end-users, reducing trust. Unsupervised or weakly-supervised techniques are possible candidates for solving the first issue, while explainable classifiers, or post-hoc interpretability and explainability methods applied to opaque classifiers, may address the second. This research addresses both problems by segmenting brain tumours without segmentation labels for training, using an explainable deep learning-based classifier. The classifier is constructed by combining a segmentation model with a global pooling operation and is trained to obtain classification results from this combined method.
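
The following is a minimal PyTorch sketch of the idea described in the synopsis: a fully convolutional segmentation body whose output map is collapsed to an image-level classification logit by a global pooling layer, so the whole network can be trained with classification labels only. The specific layer choices (channel counts, max pooling, the toy convolutional body) are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

class WeaklySupervisedSegClassifier(nn.Module):
    def __init__(self, in_channels: int = 1):
        super().__init__()
        # Small fully convolutional body; a stand-in for a real
        # segmentation model (e.g., a U-Net-style network).
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 1),  # one-channel map of per-pixel tumour evidence
        )
        # Global pooling collapses the dense map to a single logit.
        self.pool = nn.AdaptiveMaxPool2d(1)

    def forward(self, x):
        seg_map = self.body(x)                 # dense map, reused as the segmentation
        logit = self.pool(seg_map).flatten(1)  # image-level logit for training
        return logit, seg_map

model = WeaklySupervisedSegClassifier()
images = torch.randn(4, 1, 64, 64)                     # toy batch of 2D slices
labels = torch.tensor([1., 0., 1., 0.]).unsqueeze(1)   # tumour present / absent

logit, seg_map = model(images)
loss = nn.functional.binary_cross_entropy_with_logits(logit, labels)
loss.backward()  # trained with image-level classification labels only

# At inference, thresholding the (sigmoid of the) pre-pooling map yields a
# tumour segmentation even though no segmentation labels were ever used.
mask = torch.sigmoid(seg_map) > 0.5

Because the classification decision is literally the pooled value of the segmentation map, the map itself explains the classifier's prediction, which is what makes this kind of classifier explainable by construction.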
