Keywords: MR Fingerprinting, AI/ML Image Reconstruction, Contrastive Learning
Motivation: MR Fingerprinting (MRF) enables simultaneous multi-parametric mapping, but conventional dictionary matching is computationally expensive and sensitive to noise and artifacts.
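For context, a minimal sketch of the conventional dictionary-matching step this motivates replacing, assuming a precomputed dictionary of simulated fingerprints; the function name and array shapes are illustrative, not from the study:

```python
import numpy as np

def dictionary_match(signals, dictionary, params):
    """Conventional MRF dictionary matching via maximum inner product.

    signals:    (n_voxels, T) measured fingerprints
    dictionary: (n_atoms, T) simulated fingerprints, one per (T1, T2) pair
    params:     (n_atoms, 2) tissue parameters (T1, T2) for each atom
    """
    # Normalize so the inner product acts as a correlation measure.
    signals = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    dictionary = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    # Exhaustive correlation against every atom costs O(n_voxels * n_atoms * T),
    # which is the computational bottleneck referred to above.
    corr = signals @ dictionary.T            # (n_voxels, n_atoms)
    best = np.argmax(np.abs(corr), axis=1)   # best-matching atom per voxel
    return params[best]                      # (n_voxels, 2) estimated (T1, T2)
```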
Goal(s): To develop CLIP-MRF, a novel network that incorporates contrastive learning to improve pattern matching and quantification accuracy in accelerated MRF.
Approach: We propose a dual-encoder contrastive training method that robustly maps MRF signals to tissue parameters. During training, the model maximizes the similarity of matched signal-parameter pairs and minimizes that of mismatched pairs.
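A minimal PyTorch sketch of such a dual-encoder contrastive objective, in the spirit of CLIP's symmetric InfoNCE loss; the encoder architectures, layer sizes, and names below are illustrative assumptions, not the authors' reported design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoder(nn.Module):
    """Two encoders embed MRF signal evolutions and tissue-parameter
    vectors (e.g. T1, T2) into a shared latent space (sizes assumed)."""
    def __init__(self, signal_len=1000, n_params=2, dim=128):
        super().__init__()
        self.signal_enc = nn.Sequential(
            nn.Linear(signal_len, 512), nn.ReLU(), nn.Linear(512, dim))
        self.param_enc = nn.Sequential(
            nn.Linear(n_params, 64), nn.ReLU(), nn.Linear(64, dim))
        self.log_temp = nn.Parameter(torch.tensor(0.0))  # learnable temperature

    def forward(self, signals, params):
        z_s = F.normalize(self.signal_enc(signals), dim=-1)
        z_p = F.normalize(self.param_enc(params), dim=-1)
        return z_s, z_p, self.log_temp.exp()

def contrastive_loss(z_s, z_p, temp):
    """Symmetric InfoNCE: pull matched signal-parameter pairs together,
    push mismatched pairs within the batch apart."""
    logits = z_s @ z_p.t() * temp                 # (B, B) similarity matrix
    labels = torch.arange(z_s.size(0), device=z_s.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```

Within a batch, the diagonal of the similarity matrix holds the matched pairs, so cross-entropy over rows and columns implements the stated maximize/minimize objective.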
Results: CLIP-MRF outperforms state-of-the-art MRF methods in T1 and T2 quantification, reducing both reconstruction time and quantification error.
Impact: The CLIP-MRF network enables accurate parameter mapping and improves computational efficiency for accelerated MRF. Trained solely on simulated data, the network generalizes robustly across signals with varying noise and artifact levels, paving the way for fast and reliable tissue quantification.