Deep-learning based brain tumor segmentation using quantitative MRI
Iulian Emil Tampu1,2, Ida Blystad2,3,4, Neda Haj-Hosseini1, and Anders Eklund1,5
1Biomedical Engineering, Linköping University, Linköping, Sweden, 2Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden, 3Department of Radiology in Linköping, Region Östergötland, Center for Diagnostics, Linköping, Sweden, 4Department of Health, Medicine and Caring Sciences, Division of Diagnostics and Specialist Medicine, Linköping University, Linköping, Sweden, 5Department of Computer and Information Science, Linköping University, Linköping, Sweden
Manual annotation of gliomas in magnetic resonance (MR) images is a laborious task, and active tumor regions that do not enhance in the conventionally acquired MR sequences cannot be identified. Recently, quantitative MRI (qMRI) has shown the capability to capture tumor-like values beyond the visible tumor structure. To address the challenges of manual annotation, qMRI data were used to train a 2D U-Net deep-learning model for brain tumor segmentation. Results on the available data show that a 7% higher Dice score is obtained when the model is trained on qMRI post-contrast images compared to conventional MR images.
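The Dice score used to compare segmentation performance can be illustrated with a minimal sketch, assuming binary segmentation masks stored as NumPy arrays (the function name and epsilon smoothing term are illustrative, not from the abstract):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), with eps to avoid division by zero."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: two overlapping 2D masks (4 voxels vs. 6 voxels, 4 shared)
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1
print(round(dice_score(a, b), 2))  # → 0.8
```

A Dice score of 1 indicates perfect overlap between the predicted and reference tumor masks; 0 indicates no overlap.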