Keywords: AI/ML Image Reconstruction, Transformer, Swin, Attention
Motivation: Shifted Window (Swin) Vision Transformer cascades with hybrid attention have been shown to be effective for undersampled MRI reconstruction, but their large memory requirement makes long cascades computationally expensive.
Goal(s): In this work, we aim to reduce the memory footprint of the SwinV2 Transformer to improve its suitability for cascaded MRI reconstruction models.
Approach: We introduce the new SwinV2-Micro Transformer, which enables longer cascades with overlapped attention (an illustrative cascade structure is sketched below), and test our model against the previously developed hybrid SwinV2-Tiny and other state-of-the-art methods.
Results: Our proposed SwinV2-Micro cascade outperforms the competing methods in multi-channel MRI reconstruction at multiple acceleration factors.
Impact: A SwinV2-Micro Transformer architecture, smaller than conventional variants, is proposed to facilitate the incorporation of powerful overlapped attention in Transformer cascades for MRI reconstruction. This enables longer cascades and higher reconstruction quality at different acceleration factors.
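To make the cascaded design concrete, the following is a minimal, hypothetical PyTorch sketch of an unrolled Transformer cascade with interleaved data consistency, the general pattern such reconstruction models follow. The names SwinV2MicroBlock, CascadedRecon, and n_cascades are placeholders of our own, not the authors' implementation: the stand-in block uses plain convolutions rather than actual SwinV2 shifted-window/overlapped attention, and the data-consistency step is single-coil for brevity, whereas the abstract's multi-channel setting would require coil sensitivity handling.

```python
# Hypothetical sketch of an unrolled reconstruction cascade.
# SwinV2MicroBlock / CascadedRecon are illustrative placeholders,
# NOT the authors' SwinV2-Micro model.
import torch
import torch.nn as nn


class DataConsistency(nn.Module):
    """Re-impose the acquired k-space samples after each refinement step."""

    def forward(self, image, kspace, mask):
        k = torch.fft.fft2(image)              # image -> k-space
        k = torch.where(mask, kspace, k)       # keep measured samples verbatim
        return torch.fft.ifft2(k)              # back to image space


class SwinV2MicroBlock(nn.Module):
    """Placeholder refinement block; a real cascade would use a reduced-size
    SwinV2 stage here instead of these convolutions."""

    def __init__(self, channels=2, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1),
            nn.GELU(),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)                # residual refinement


class CascadedRecon(nn.Module):
    """Alternate learned refinement and data consistency n_cascades times."""

    def __init__(self, n_cascades=10):
        super().__init__()
        self.blocks = nn.ModuleList(SwinV2MicroBlock() for _ in range(n_cascades))
        self.dc = DataConsistency()

    def forward(self, kspace, mask):
        image = torch.fft.ifft2(kspace)        # zero-filled starting estimate
        for block in self.blocks:
            # Real/imaginary parts become the two network input channels.
            x = torch.stack([image.real, image.imag], dim=1)
            x = block(x)
            image = torch.complex(x[:, 0], x[:, 1])
            image = self.dc(image, kspace, mask)
        return image


# Smoke test on random data: batch of 1, 64x64 k-space, 25% sampling mask.
kspace = torch.randn(1, 64, 64, dtype=torch.complex64)
mask = torch.rand(1, 64, 64) < 0.25
recon = CascadedRecon(n_cascades=4)(kspace, mask)
print(recon.shape)  # torch.Size([1, 64, 64])
```

A longer cascade simply repeats the refine/data-consistency pair more times, which is why the per-block memory footprint the abstract targets determines how long a cascade can be trained.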