Abstract #5003

Quality assessment of MR images: Does deep learning outperform machine learning with handcrafted features on new sites?

Prabhjot Kaur1, John S Thornton2,3, Frederik Barkhof1,2,4, Tarek A. Yousry2,5, Sjoerd Vos1,2,6, and Hui Zhang1
1Centre for Medical Image Computing and Department of Computer Science, University College London, London, United Kingdom, 2Neuroradiological Academic Unit, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom, 3Queen Square Centre for Neuromuscular Diseases, Department of Neuromuscular Diseases, UCL Queen Square Institute of Neurology, London, United Kingdom, 4Radiology & Nuclear Medicine, VU University Medical Center, Amsterdam, Netherlands, 5Queen Square Centre for Neuromuscular Diseases, Department of Neuromuscular Diseases, UCL Queen Square Institute of Neurology, London, United Kingdom, 6Centre for Microscopy, Characterisation, and Analysis, The University of Western Australia, Nedlands, Australia

Synopsis

Keywords: Artifacts, Brain, Quality, Deep learning, Quality Assessment

Motivation: Deep learning (DL) outperforms conventional machine learning (ML) based on handcrafted features in many vision tasks, but its superiority in assessing brain MRI image quality for new sites/scanners is unclear.

Goal(s): Compare DL and conventional ML for quality assessment of brain MRI images from new sites/scanners.

Approach: One popular, widely adopted DL method and one conventional ML method are evaluated on a multi-site dataset using a leave-one-site-out approach with a binary quality label (good/bad).
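
For illustration, the sketch below sets up a leave-one-site-out evaluation with scikit-learn. The feature matrix, labels, site IDs, and the logistic-regression classifier are placeholders for this example, not the specific DL or handcrafted-feature ML models compared in this work.

```python
# Minimal sketch of a leave-one-site-out evaluation (assumptions: scikit-learn is
# available; X, y and sites are placeholders for per-image quality features,
# binary good/bad labels and site/scanner IDs; the logistic-regression classifier
# stands in for the DL / handcrafted-feature ML models actually compared).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))        # hypothetical per-image quality features
y = rng.integers(0, 2, size=300)      # 1 = good quality, 0 = bad (synthetic)
sites = rng.integers(0, 6, size=300)  # acquisition site/scanner ID per image

per_site_ba = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=sites):
    # train on all sites except one, test on the held-out site
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    per_site_ba.append(balanced_accuracy_score(y[test_idx], pred))

print(f"BA per held-out site: {np.round(per_site_ba, 2)}")
print(f"mean +/- std: {np.mean(per_site_ba):.2f} +/- {np.std(per_site_ba):.2f}")
```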

Results: Average balanced accuracies (BA) for the DL and conventional ML approaches are comparably poor (0.60±0.12 and 0.54±0.12, respectively) and do not exceed 0.76, suggesting room for improvement.
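
For reference, balanced accuracy for a binary good/bad label is conventionally defined as the mean of the per-class recalls (sensitivity on one class and specificity on the other):

```latex
\mathrm{BA} = \frac{1}{2}\left(\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}} + \frac{\mathrm{TN}}{\mathrm{TN}+\mathrm{FP}}\right)
```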

Impact: Widespread adoption of automated quality assessment of brain MRI images is limited by a lack of generalizability. By comparing popular DL and conventional ML approaches, we find comparable but limited generalizability. This underscores the need for future algorithm development.

