Abstract #4936

PET Synthesis from Multi-Contrast MRI with Attention-Based 3D Encoder-Decoder Networks

Ramy Hussein1, David D. Shin2, Moss Zhao1, Jia Guo3, Michael Moseley1, and Greg Zaharchuk1
1Radiology, Stanford University, Stanford, CA, United States, 2Global MR Applications & Workflow, GE Healthcare, Menlo Park, CA, United States, 3Department of Bioengineering, University of California Riverside, Riverside, CA, United States

Synopsis

We present an attention-based 3D convolutional encoder-decoder network to synthesize PET cerebral blood flow (CBF) maps from multi-parametric MRI images without using radioactive tracers. Inputs to the prediction model are structural MRI (T1-weighted and T2 fluid-attenuated inversion recovery [FLAIR]) and arterial spin labeling (ASL) perfusion MRI images. Results show that encoder-decoder networks, with attention mechanisms and customized loss functions, can adequately combine multiple MRI image types and predict the gold-standard oxygen-15-water PET CBF maps. Accurate quantification of PET from MRI has great potential to increase the accessibility of cerebrovascular disease assessment for underserved populations, underprivileged communities, and developing nations.
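To make the described architecture concrete, the sketch below is a minimal, illustrative 3D encoder-decoder with a simple attention gate that maps a three-channel stack of MRI volumes (T1, T2-FLAIR, ASL) to a single-channel CBF map. It is not the authors' implementation; the layer widths, the one-level U-Net depth, the specific attention-gate formulation, and the L1 training loss mentioned in the usage comment are assumptions for illustration only.

```python
# Hypothetical sketch of an attention-based 3D encoder-decoder for PET CBF synthesis.
# Channel counts, depth, and the attention gate design are illustrative assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two-step 3D conv block: convolution, batch norm, ReLU."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )


class AttentionGate(nn.Module):
    """Weights encoder (skip) features with a spatial map derived from the decoder signal."""

    def __init__(self, ch):
        super().__init__()
        self.psi = nn.Sequential(nn.Conv3d(ch, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, skip, gate):
        # Attention map in [0, 1] modulates the skip connection voxel-wise.
        return skip * self.psi(skip + gate)


class PETSynthesisNet(nn.Module):
    """One-level 3D encoder-decoder: 3 MRI channels in, 1 CBF channel out."""

    def __init__(self, in_ch=3, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)            # full resolution
        self.pool = nn.MaxPool3d(2)
        self.enc2 = conv_block(base, base * 2)         # half resolution
        self.bottleneck = conv_block(base * 2, base * 2)
        self.att = AttentionGate(base * 2)
        self.up = nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False)
        self.dec = conv_block(base * 3, base)          # concat of upsampled + skip features
        self.out = nn.Conv3d(base, 1, kernel_size=1)   # predicted CBF map

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(e2)
        g = self.att(e2, b)                            # attention-weighted skip features
        d = self.dec(torch.cat([self.up(g), e1], dim=1))
        return self.out(d)


if __name__ == "__main__":
    model = PETSynthesisNet()
    mri = torch.randn(1, 3, 32, 64, 64)   # stacked T1, T2-FLAIR, ASL volumes (toy size)
    pet_pred = model(mri)
    print(pet_pred.shape)                 # torch.Size([1, 1, 32, 64, 64])
    # Training would minimize a customized image-synthesis loss against measured
    # 15O-water PET CBF maps; a plain L1 loss is used here purely as a placeholder:
    loss = nn.L1Loss()(pet_pred, torch.randn_like(pet_pred))
```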

Keywords