Keywords: AI/ML Image Reconstruction, Machine Learning/Artificial Intelligence, Vision transformer, accelerated parameter mapping, T2 mapping
Motivation: Accurate quantification in parameter mapping requires sufficient sampling of the temporal signal evolution. Current DL-based approaches that learn parameter maps from fewer multi-contrast images often rely on fixed input parameters, limiting their flexibility.
Goal(s): To learn temporal characteristics of underlying tissues in multi-contrast MR images to provide a flexible DL model for accelerated quantitative T2-mapping.
Approach: A vision transformer (T2-ViT) is combined with masked auto-encoder training to learn model-free T2 signal evolution given random temporal under-sampling.
Results: Given the first three TE images, the model can predict T2-weighted images at longer TEs with high structural similarity and low T2-estimation error, enabling acceleration.
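For context, the conventional baseline that such acceleration is measured against is a monoexponential fit of the multi-echo signal, S(TE) = S0 · exp(−TE/T2), applied voxel-wise across the TE images. The sketch below illustrates this standard fit with a log-linear least-squares solve; the TE values and signal amplitudes are illustrative placeholders, not data from this abstract.

```python
import numpy as np

def fit_t2(te_ms: np.ndarray, signal: np.ndarray) -> tuple[float, float]:
    """Log-linear least-squares fit of S(TE) = S0 * exp(-TE / T2).

    Taking logs gives ln S = ln S0 - TE / T2, a line in TE, so an
    ordinary degree-1 polynomial fit recovers both parameters.
    Returns (S0, T2 in ms); assumes positive signal (valid for
    magnitude images well above the noise floor).
    """
    slope, intercept = np.polyfit(te_ms, np.log(signal), 1)
    return float(np.exp(intercept)), float(-1.0 / slope)

# Synthetic single-voxel example: ground truth S0 = 1000, T2 = 80 ms
te = np.array([10.0, 20.0, 30.0, 40.0, 80.0, 160.0])
sig = 1000.0 * np.exp(-te / 80.0)
s0, t2 = fit_t2(te, sig)
print(round(s0), round(t2))  # 1000 80
```

The point of the abstract's approach is that a model which has learned the temporal signal characteristics can stand in for the later echoes in this fit, so fewer TE images need to be acquired.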
Impact: An understanding of underlying temporal characteristics of tissues with vision transformers can help with intelligent design of current multi-contrast data acquisition schemes.
How to access this content:
For one year after publication, abstracts and videos are only open to registrants of this annual meeting. Registrants should use their existing login information. Non-registrant access can be purchased via the ISMRM E-Library.
After one year, current ISMRM & ISMRT members get free access to both the abstracts and videos. Non-members and non-registrants must purchase access via the ISMRM E-Library.
After two years, the meeting proceedings (abstracts) are opened to the public and require no login information. Videos remain password-protected, accessible to members, registrants, and E-Library customers.