Keywords: Language Models, AI/ML Image Reconstruction, Vision-Language Model, MoCo, Artifact Reduction
Motivation: MRI images often suffer from motion artifacts due to patient movement, compromising image quality and leading to diagnostic inaccuracies.
Goal(s): Enhance artifact removal by integrating textual descriptions into deep learning methods.
Approach: We developed the Vision-Language Motion Correction (VLM-MoCo) method, which combines image data with textual descriptions of artifact characteristics. Text is encoded with BERT, and the resulting embeddings are integrated into a 3D pix2pix GAN.
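The abstract does not detail how the BERT embeddings enter the generator, so the sketch below shows one common way to condition a 3D convolutional generator on a text embedding: feature-wise scale and shift (FiLM). The layer sizes, the FiLM mechanism, and the random 768-dim vector standing in for a BERT [CLS] embedding are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TextConditionedGenerator3D(nn.Module):
    """Toy 3D generator conditioned on a text embedding via FiLM
    (per-channel scale/shift). Stands in for a pix2pix-style generator;
    the 768-dim input vector stands in for a BERT [CLS] embedding."""

    def __init__(self, text_dim: int = 768, ch: int = 8):
        super().__init__()
        self.enc = nn.Conv3d(1, ch, kernel_size=3, padding=1)
        self.film = nn.Linear(text_dim, 2 * ch)  # -> per-channel gamma, beta
        self.dec = nn.Conv3d(ch, 1, kernel_size=3, padding=1)

    def forward(self, vol: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.enc(vol))
        gamma, beta = self.film(text_emb).chunk(2, dim=-1)
        # Broadcast (B, C) -> (B, C, 1, 1, 1) over the 3D feature map.
        h = gamma[..., None, None, None] * h + beta[..., None, None, None]
        return self.dec(torch.relu(h))

corrupted = torch.randn(1, 1, 8, 8, 8)  # motion-corrupted volume (toy size)
text_emb = torch.randn(1, 768)          # placeholder for a BERT embedding
out = TextConditionedGenerator3D()(corrupted, text_emb)
print(tuple(out.shape))  # (1, 1, 8, 8, 8): same shape as the input volume
```

The key property this illustrates is that the text description modulates every spatial location of the image features, letting an artifact description influence correction everywhere in the volume.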
Results: VLM-MoCo significantly outperformed the baseline, achieving lower NMSE and higher PSNR and SSIM values, demonstrating its effectiveness in improving image quality and artifact removal.
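For reference, the three reported metrics can be computed as below. NMSE and PSNR follow their standard definitions; the SSIM shown is a simplified single-window (global) variant rather than the usual locally windowed version, so it is a sketch of the formula, not of the authors' evaluation pipeline.

```python
import numpy as np

def nmse(ref: np.ndarray, pred: np.ndarray) -> float:
    """Normalized mean squared error: lower is better."""
    return float(np.sum((ref - pred) ** 2) / np.sum(ref ** 2))

def psnr(ref: np.ndarray, pred: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB: higher is better."""
    mse = np.mean((ref - pred) ** 2)
    return float(10 * np.log10(data_range ** 2 / mse))

def ssim_global(x: np.ndarray, y: np.ndarray, data_range: float = 1.0) -> float:
    """Single-window SSIM (no local sliding window): higher is better."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

ref = np.random.rand(16, 16, 16)          # ground-truth volume (toy data)
pred = ref + 0.05 * np.random.randn(16, 16, 16)  # imperfect correction
print(nmse(ref, pred), psnr(ref, pred), ssim_global(ref, pred))
```

A perfect correction gives NMSE of 0 and SSIM of 1; PSNR diverges as the error approaches zero, which is why it is reported in dB.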
Impact: By integrating text descriptions into deep learning models, this method gives clinicians a direct way to guide AI-based removal of MRI motion artifacts. It especially benefits patients prone to involuntary movement and points toward closer clinician-AI collaboration in medical imaging.