Abstract #0520

Predicting FDG PET from Multi-contrast MRIs using Deep Learning in Patients with Brain Neoplasms

Jiahong Ouyang1, Kevin T. Chen2, Jarrett Rosenberg1, and Greg Zaharchuk1
1Stanford University, Stanford, CA, United States, 2National Taiwan University, Taipei, Taiwan

Synopsis

Keywords: Machine Learning/Artificial Intelligence, PET/MR

PET is a widely used imaging technique, but it exposes subjects to radiation and is not offered at the majority of medical centers worldwide. Here, we propose to synthesize FDG-PET images from multi-contrast MR images using a U-Net-based network with attention modules and transformer blocks. Experiments on a dataset of 87 brain lesions in 59 patients demonstrated that the proposed method can generate high-quality PET images from MR images without the need for radiotracer injection. We also demonstrate methods to handle missing or corrupted input sequences.
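To make the described architecture concrete, the sketch below shows one plausible way to combine a U-Net-style encoder-decoder with an attention gate on the skip connection and a transformer block at the bottleneck, mapping multi-contrast MR slices to a PET-like output. This is a minimal illustration in PyTorch under assumed design choices (2D slices, four input contrasts, channel widths, and layer counts are all placeholders), not the authors' implementation.

```python
# Minimal sketch (assumptions: PyTorch, 4 input MR contrasts, 2D slices).
# Illustrates a U-Net with an attention-gated skip connection and a
# transformer bottleneck for MR-to-PET synthesis; not the authors' code.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class AttentionGate(nn.Module):
    """Gates encoder skip features with the decoder signal (additive attention)."""
    def __init__(self, ch):
        super().__init__()
        self.theta = nn.Conv2d(ch, ch, 1)
        self.phi = nn.Conv2d(ch, ch, 1)
        self.psi = nn.Conv2d(ch, 1, 1)

    def forward(self, skip, gating):
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(skip) + self.phi(gating))))
        return skip * attn


class UNetTransformer(nn.Module):
    """U-Net-style generator: multi-contrast MR in, synthetic PET out."""
    def __init__(self, in_channels=4, base=32):
        super().__init__()
        self.enc1 = conv_block(in_channels, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        # Transformer over bottleneck features (tokens = spatial positions).
        layer = nn.TransformerEncoderLayer(d_model=base * 2, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.gate = AttentionGate(base)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)  # single-channel PET estimate

    def forward(self, x):
        s1 = self.enc1(x)
        s2 = self.enc2(self.pool(s1))
        b, c, h, w = s2.shape
        tokens = s2.flatten(2).transpose(1, 2)              # (B, H*W, C)
        s2 = self.transformer(tokens).transpose(1, 2).reshape(b, c, h, w)
        up = self.up(s2)
        skip = self.gate(s1, up)
        return self.head(self.dec1(torch.cat([skip, up], dim=1)))


# Example: a batch of two 4-contrast 64x64 MR slices mapped to PET-like outputs.
if __name__ == "__main__":
    net = UNetTransformer(in_channels=4)
    pet = net(torch.randn(2, 4, 64, 64))
    print(pet.shape)  # torch.Size([2, 1, 64, 64])
```

One way to handle missing or corrupted input sequences in such a model is to randomly zero out (or replace) individual MR contrast channels during training, so the network learns to synthesize PET from any available subset; the abstract does not specify the exact mechanism, so this should be read only as an illustrative option.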

