Manual annotation of gliomas in magnetic resonance (MR) images is a laborious task, and active tumor regions that do not enhance in conventionally acquired MR modalities cannot be identified. Recently, quantitative MRI (qMRI) has shown the capability to capture tumor-like values beyond the visible tumor structure. To address the challenges of manual annotation, qMRI data were used to train a 2D U-Net deep-learning model for brain tumor segmentation. Results on the available data show a Dice score 7% higher when the model is trained on qMRI post-contrast images than when conventional MR images are used.
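The Dice score reported above measures the voxel-wise overlap between a predicted segmentation and the manual annotation. As an illustration only (not the authors' code), a minimal NumPy sketch of the metric on binary masks might look like this:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    # Dice = 2 * |A ∩ B| / (|A| + |B|), on boolean masks
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection) / (pred.sum() + target.sum() + eps)

# Toy 2x3 masks, purely illustrative
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_score(pred, target), 2))  # → 0.67
```

In a segmentation study such as this one, the score would be computed per tumor region and averaged over the test cases; the 7% figure refers to the difference between the qMRI-trained and conventionally trained models.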