Keywords: Analysis/Processing, Machine Learning/Artificial Intelligence, multi-modal MRI
Motivation: Synthetic CT generation is a useful technique for producing CT images from MR images. However, most existing methods exploit only a single MRI modality, such as T1-weighted (T1w) images.
Goal(s): We aim to develop a synthetic CT method integrating dual-channel T1w+FLAIR input images.
Approach: A dual-channel, multi-task deep learning approach based on a 3D Transformer U-net was tested on a public human brain MRI-CT dataset. Its performance was compared with that of single-modal T1w-based CT synthesis (a minimal sketch of the dual-channel design follows the Impact statement).
Results: Our results indicate that dual-modal T1w+FLAIR inputs provide richer detail than single-modal synthetic CT, particularly in pixel-level predictions; the improvement in morphology was moderate.
Impact: The proposed framework may be used to integrate two or more MRI modalities to improve the performance of CT image synthesis.
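The sketch below illustrates the core idea of the dual-channel, multi-task design: the co-registered T1w and FLAIR volumes are concatenated along the channel axis and passed to a shared backbone with two output heads. This is a minimal sketch under stated assumptions, not the authors' implementation: the tiny 3D convolutional backbone stands in for the 3D Transformer U-net, and the auxiliary bone-mask head is a hypothetical example of a second task.

```python
# Minimal PyTorch sketch of dual-channel (T1w + FLAIR) input with a
# multi-task output. The backbone here is a tiny 3D conv stand-in, NOT
# the 3D Transformer U-net used in the study; the auxiliary bone-mask
# task is an illustrative assumption.
import torch
import torch.nn as nn

class DualChannelSynthCT(nn.Module):
    def __init__(self, base_channels: int = 16):
        super().__init__()
        # Two input channels: channel 0 = T1w, channel 1 = FLAIR
        self.encoder = nn.Sequential(
            nn.Conv3d(2, base_channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(base_channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(base_channels, base_channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(base_channels),
            nn.ReLU(inplace=True),
        )
        # Task 1: voxel-wise synthetic CT intensity regression
        self.ct_head = nn.Conv3d(base_channels, 1, kernel_size=1)
        # Task 2 (hypothetical auxiliary task): binary bone-mask logits
        self.aux_head = nn.Conv3d(base_channels, 1, kernel_size=1)

    def forward(self, t1w: torch.Tensor, flair: torch.Tensor):
        # Stack the two MRI modalities along the channel dimension:
        # (B, 1, D, H, W) + (B, 1, D, H, W) -> (B, 2, D, H, W)
        x = torch.cat([t1w, flair], dim=1)
        features = self.encoder(x)
        return self.ct_head(features), self.aux_head(features)

if __name__ == "__main__":
    model = DualChannelSynthCT()
    t1w = torch.randn(1, 1, 32, 32, 32)    # toy patch size
    flair = torch.randn(1, 1, 32, 32, 32)
    ct_pred, aux_pred = model(t1w, flair)
    print(ct_pred.shape, aux_pred.shape)   # each: torch.Size([1, 1, 32, 32, 32])
```

In this layout, extending the framework to additional MRI modalities only requires increasing the number of input channels in the first convolution; the rest of the backbone and the task heads are unchanged.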