Keywords: AI Diffusion Models, Multimodal
Motivation: Multimodal image reconstruction and synthesis are important tasks in medical imaging, but current methods require multiple specialized models, adding complexity to training and deployment.
Goal(s): Develop a unified model for both multimodal image reconstruction and synthesis across diverse scenarios, streamlining training and deployment while achieving performance comparable to specialized models.
Approach: Train an unconditional diffusion model on multimodal images and, at inference, use it to generate all target modalities from arbitrary combinations of input data.
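The abstract does not specify the conditioning mechanism, but a common way to steer an unconditional diffusion model with partial observations is an inpainting-style sampler (e.g., RePaint) that re-imposes the noised observed modalities at every reverse step so only the missing modalities are generated. The sketch below illustrates this idea under stated assumptions; the network `eps_net`, the stacked MRI/CT/PET channel layout, the image size, and the noise schedule are all illustrative, not taken from the abstract.

```python
# Minimal sketch (not the authors' code): conditioning an unconditional
# multimodal diffusion model on observed modalities by overwriting their
# channels with appropriately noised observations at each reverse step.
import torch

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)          # assumed linear schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def eps_net(x, t):
    # Placeholder for the pre-trained unconditional noise predictor,
    # assumed to be trained jointly on stacked MRI/CT/PET channels.
    return torch.zeros_like(x)

@torch.no_grad()
def sample_missing_modalities(observed, mask, shape=(1, 3, 64, 64)):
    """observed: known modality channels; mask: 1 where data is observed.
    Channels stand for MRI, CT, PET (an illustrative layout)."""
    x = torch.randn(shape)
    for t in reversed(range(T)):
        ab = alpha_bars[t]
        # Noise the observed channels to the current timestep and
        # overwrite them, so only missing modalities are synthesized.
        x_obs = ab.sqrt() * observed + (1 - ab).sqrt() * torch.randn_like(observed)
        x = mask * x_obs + (1 - mask) * x
        # Standard DDPM reverse step on the full multimodal stack.
        eps = eps_net(x, torch.tensor([t]))
        mean = (x - betas[t] / (1 - ab).sqrt() * eps) / alphas[t].sqrt()
        x = mean + (betas[t].sqrt() * torch.randn_like(x) if t > 0 else 0.0)
    return x

# Example: synthesize PET (channel 2) from observed MRI and CT (channels 0, 1).
obs = torch.zeros(1, 3, 64, 64)
mask = torch.tensor([1.0, 1.0, 0.0]).view(1, 3, 1, 1).expand(1, 3, 64, 64)
result = sample_missing_modalities(obs, mask)
```

Because the conditioning happens only at sampling time, the same pre-trained model handles any input/output combination (reconstruction or synthesis) simply by changing the mask, which is what makes a single unified model possible.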
Results: Validated on an MRI-CT-PET dataset, demonstrating that a single pre-trained diffusion model can perform both multimodal image reconstruction and synthesis.
Impact: This work simplifies multimodal medical imaging by using one model for both reconstruction and synthesis, reducing training and deployment complexity. It also has the potential to enhance clinical imaging efficiency and provides a new tool for leveraging multimodal information.