Abstract #1964

A deep learning based approach to generate synthetic CT images from multi-modal MRI data

Zhuoyao Xin1, Christopher Wu2, Dong Liu3, Chunming Gu1,4,5, Jia Guo2, and Jun Hua1,4,5
1F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute, Baltimore, MD, United States, 2Department of Biomedical Engineering, Columbia University, New York City, NY, United States, 3Department of Neuroscience, Columbia University, New York City, NY, United States, 4Neurosection, Division of MRI Research, Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, United States, 5Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States

Synopsis

Keywords: Analysis/Processing, Machine Learning/Artificial Intelligence, multi-modal MRI

Motivation: Synthetic CT is a useful technique for generating CT-like images from MR images. Most existing methods exploit only a single MRI modality, such as T1-weighted (T1w) images.

Goal(s): We aim to develop a synthetic CT method integrating dual-channel T1w+FLAIR input images.

Approach: A dual-channel, multi-task deep learning approach based on the 3D Transformer U-net was tested using a public human brain MRI-CT dataset. Its performance was compared to single-modal T1w-based CT synthesis.
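The core of the dual-channel idea is to stack co-registered T1w and FLAIR volumes along a channel axis before they enter the 3D network. The snippet below is a minimal NumPy sketch of that input preparation, not the authors' implementation; the volume shapes, the toy 1x1x1 channel-mixing layer, and its weights are illustrative assumptions.

```python
import numpy as np

# Assumed shapes: two co-registered 3D brain volumes of size (D, H, W).
D, H, W = 8, 16, 16
rng = np.random.default_rng(0)
t1w = rng.random((D, H, W), dtype=np.float32)    # placeholder T1w volume
flair = rng.random((D, H, W), dtype=np.float32)  # placeholder FLAIR volume

# Dual-channel input: stack the two modalities along a leading channel
# axis, giving shape (2, D, H, W) as expected by 3D convolutional layers.
x = np.stack([t1w, flair], axis=0)
assert x.shape == (2, D, H, W)

# Toy stand-in for the network's first layer: a 1x1x1 "convolution"
# that mixes the 2 input channels into 4 feature maps via einsum.
weights = rng.random((4, 2)).astype(np.float32)  # (out_channels, in_channels)
features = np.einsum("oc,cdhw->odhw", weights, x)
assert features.shape == (4, D, H, W)
```

In an actual 3D Transformer U-Net, this 2-channel tensor would simply replace the usual 1-channel T1w input; the rest of the encoder-decoder is unchanged, which is what makes the dual-channel extension cheap to adopt.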

Results: Our results indicate that dual-modal T1w+FLAIR images provide richer detail than single-modal synthetic CT, particularly in pixel-level predictions. The improvement in morphology was moderate.
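Pixel-level agreement between synthetic and reference CT is commonly scored with metrics such as mean absolute error (MAE, in Hounsfield units) and PSNR. The sketch below shows these standard metrics for illustration; the abstract does not specify which metrics the authors used, and the intensity range and example volumes here are assumptions.

```python
import numpy as np

def mae_hu(synth, ref):
    """Mean absolute error in Hounsfield units (pixel-level agreement)."""
    return float(np.mean(np.abs(synth - ref)))

def psnr(synth, ref, data_range=4000.0):
    """Peak signal-to-noise ratio over an assumed CT intensity range
    (e.g. roughly -1000 to 3000 HU, hence data_range=4000)."""
    mse = np.mean((synth - ref) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

# Toy example: a synthetic CT with a constant 50 HU offset from reference.
ref = np.zeros((4, 4, 4))
synth = ref + 50.0
print(mae_hu(synth, ref))  # 50.0
```

Reporting MAE alongside PSNR is useful because MAE is directly interpretable in HU, while PSNR summarizes squared error on a log scale.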

Impact: The proposed framework may be used to integrate two or more MRI modalities to improve the performance of CT image synthesis.

