Abstract #0310

Improving Perfusion Image Quality and Quantification Accuracy Using Multi-contrast MRI and Deep Convolutional Neural Networks

Jia Guo1, Enhao Gong2, Maged Goubran1, Audrey P. Fan1, Mohammad M. Khalighi3, and Greg Zaharchuk1

1Radiology, Stanford University, Stanford, CA, United States, 2Electrical Engineering, Stanford University, Stanford, CA, United States, 3Global Applied Science Lab, GE Healthcare, Menlo Park, CA, United States

We propose a novel method that uses deep convolutional neural networks (dCNNs) to combine multiple MRI contrasts, including single- and multi-delay pseudo-continuous arterial spin labeling (PCASL) and structural scans, to synthesize perfusion maps that approach the accuracy of PET perfusion measurements. The dCNN was trained and tested on both healthy and patient datasets and demonstrated significant improvements over either ASL method alone, in both image quality (higher structural similarity and lower normalized root mean square error) and quantification accuracy (regional CBF comparable with PET). This method may potentially be generalized to other qualitative and quantitative applications.
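The abstract evaluates the synthesized perfusion maps with structural similarity (SSIM), normalized root mean square error (NRMSE), and regional CBF agreement with PET. Below is a minimal, hedged sketch of how such a comparison could be computed with scikit-image and NumPy; the function name, variable names, and masking strategy are illustrative assumptions and are not taken from the authors' implementation.

```python
# Hedged sketch (not the authors' code): compare a synthesized CBF map
# against a reference PET-based CBF map using the metrics named in the
# abstract (SSIM and NRMSE), plus mean CBF inside a brain mask.
import numpy as np
from skimage.metrics import structural_similarity, normalized_root_mse


def evaluate_cbf_map(cbf_pred, cbf_ref, brain_mask):
    """Compare predicted and reference CBF maps (2D arrays) within a mask."""
    # Zero out voxels outside the brain mask before computing image metrics.
    pred = np.where(brain_mask, cbf_pred, 0.0)
    ref = np.where(brain_mask, cbf_ref, 0.0)

    # SSIM: higher is better (1.0 means structurally identical).
    ssim = structural_similarity(ref, pred, data_range=ref.max() - ref.min())

    # NRMSE: lower is better (0.0 means identical values).
    nrmse = normalized_root_mse(ref, pred)

    # Mean CBF inside the mask, for regional comparison against PET.
    return {
        "ssim": ssim,
        "nrmse": nrmse,
        "mean_cbf_pred": float(cbf_pred[brain_mask].mean()),
        "mean_cbf_ref": float(cbf_ref[brain_mask].mean()),
    }
```

In practice these metrics would be computed per subject (and per region for the CBF comparison); the abstract does not specify the exact masking or regional parcellation used.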


Keywords