Keywords: Analysis/Processing, Machine Learning/Artificial Intelligence, image synthesis, contrast, deep learning, imputation
Motivation: Learning-based synthesis of unacquired target contrasts from acquired source contrasts can lower the costs associated with multi-contrast protocols. Transformer models, recently established as state-of-the-art in multi-contrast MRI synthesis, suffer from limited spatial precision and high computational burden.
Goal(s): Our goal was to develop a new learning-based method for image synthesis that offers high contextual sensitivity, spatial precision, and computational efficiency.
Approach: We introduced a novel method, I2I-Mamba, that episodically fuses convolutional operators for local precision with state-space operators for contextual sensitivity (an illustrative sketch of such a hybrid block follows below).
Results: I2I-Mamba achieved higher synthesis performance than previous state-of-the-art methods based on convolutional and transformer backbones, as well as a conventional SSM baseline.
Impact: The extended scope of multi-contrast protocols enabled by I2I-Mamba may facilitate comprehensive MRI exams in numerous applications, including assessment of pediatric and elderly individuals who need rapid scans due to limited motor control and vulnerability to contrast-agent toxicity.
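To make the approach concrete, below is a minimal, hypothetical sketch (PyTorch) of a residual block that pairs a convolutional branch for local precision with a simple diagonal state-space scan over flattened spatial tokens for long-range context. The class names (HybridConvSSMBlock, SimpleSSM) and all design details are illustrative assumptions and are not taken from the I2I-Mamba paper; in particular, the scan here is a plain non-selective SSM, whereas Mamba-style blocks use input-dependent (selective) state-space parameters.

```python
# Hypothetical sketch of a conv + SSM hybrid block; not the authors' implementation.
import torch
import torch.nn as nn


class SimpleSSM(nn.Module):
    """Diagonal linear state-space scan along the token dimension.

    h_t = a * h_{t-1} + b * x_t,   y_t = c * h_t + d * x_t,
    with per-channel learnable (a, b, c, d). Used only to illustrate global
    contextual mixing; selective (Mamba-style) scans condition these on x.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.log_a = nn.Parameter(torch.full((channels,), -0.5))  # decay rate
        self.b = nn.Parameter(torch.ones(channels))
        self.c = nn.Parameter(torch.ones(channels))
        self.d = nn.Parameter(torch.zeros(channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, channels)
        a = torch.exp(self.log_a).clamp(max=0.999)
        h = torch.zeros(x.shape[0], x.shape[2], device=x.device, dtype=x.dtype)
        outputs = []
        for t in range(x.shape[1]):  # sequential scan (illustrative, not optimized)
            h = a * h + self.b * x[:, t]
            outputs.append(self.c * h + self.d * x[:, t])
        return torch.stack(outputs, dim=1)


class HybridConvSSMBlock(nn.Module):
    """Residual block fusing a local conv branch with a global SSM branch."""

    def __init__(self, channels: int):
        super().__init__()
        self.local = nn.Sequential(  # local precision via convolution
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.norm = nn.LayerNorm(channels)
        self.ssm = SimpleSSM(channels)  # contextual sensitivity via SSM scan

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.local(x)
        tokens = x.flatten(2).transpose(1, 2)        # (B, H*W, C) spatial tokens
        context = self.ssm(self.norm(tokens))        # global mixing over tokens
        context = context.transpose(1, 2).reshape(b, c, h, w)
        return x + local + context                   # residual fusion of both branches


if __name__ == "__main__":
    block = HybridConvSSMBlock(channels=16)
    feats = torch.randn(1, 16, 32, 32)  # e.g., bottleneck features of a synthesis network
    print(block(feats).shape)           # torch.Size([1, 16, 32, 32])
```

In such a design, blocks of this kind could be inserted at selected stages of an otherwise convolutional image-to-image backbone, which is one plausible reading of "episodic" fusion; the actual placement and fusion strategy in I2I-Mamba may differ.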