Deep learning pipelines typically require manually annotated training data, and the complex reasoning performed by such methods makes them appear as “black boxes” to end-users, reducing trust. Unsupervised or weakly-supervised techniques are candidates for solving the first issue, while explainable classifiers, or post-hoc interpretability-explainability methods applied to opaque classifiers, may solve the second. This research addresses both problems by segmenting brain tumours without segmentation labels for training, using an explainable deep learning-based classifier. The classifier combines a segmentation model with a global pooling operation, allowing the segmentation model to be trained with, and to produce, image-level classification results.
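To illustrate the core idea, below is a minimal sketch in a PyTorch style. It is an assumption-laden illustration, not the authors' implementation: the toy architecture, names, and shapes are all hypothetical. It shows how a dense segmentation map can be collapsed to a single classification logit by global average pooling, so that only image-level labels are needed for training while the intermediate map serves as the segmentation output.

```python
# Hypothetical sketch (not the authors' code): a segmentation network trained
# with only image-level labels by pooling its output map into a class logit.
import torch
import torch.nn as nn

class WeaklySupervisedSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        # Illustrative toy network; the abstract does not specify the architecture.
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),             # per-pixel tumour evidence map
        )
        self.pool = nn.AdaptiveAvgPool2d(1)  # global pooling -> image-level logit

    def forward(self, x):
        seg_map = self.body(x)                 # (B, 1, H, W) dense map
        logit = self.pool(seg_map).flatten(1)  # (B, 1) classification logit
        return seg_map, logit

model = WeaklySupervisedSegmenter()
criterion = nn.BCEWithLogitsLoss()               # supervised by image-level labels only
images = torch.randn(4, 1, 64, 64)               # dummy MR slices
labels = torch.tensor([[1.], [0.], [1.], [0.]])  # tumour present / absent

seg_map, logit = model(images)
loss = criterion(logit, labels)
loss.backward()
# At inference, thresholding seg_map would yield the weakly-supervised segmentation.
```

The design point is that the pooling layer makes the classification loss differentiable with respect to every pixel of the map, so classification supervision indirectly shapes a segmentation.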