Keywords: Sparse & Low-Rank Models, AI/ML Image Reconstruction, MRI reconstruction, k-space interpolation, structural low-rankness, Transformer network, interpretability.
Motivation: Existing $$$k$$$-space interpolation methods rely solely on local predictability, neglecting the dependency between the missing data and the global $$$k$$$-space.
Goal(s): We seek a method that simultaneously exploits local and global predictability priors in $$$k$$$-space to accurately interpolate missing $$$k$$$-space data.
Approach: We leverage globally predictable relationships in $$$k$$$-space to guide the design of an interpretable $$$k$$$-space Transformer unfolding model. The model additionally incorporates the self-consistency prior of SPIRiT to characterize local predictability.
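The abstract does not give the model's equations, but the combination it describes — a global attention update plus a local SPIRiT-style self-consistency step inside an unrolled iteration — can be sketched in a toy form. Everything below is a hypothetical illustration: the function name `unrolled_iteration`, the row-wise attention over $$$k$$$-space, the matrix `G` standing in for the SPIRiT self-consistency operator, and the step size are all assumptions, not the authors' implementation.

```python
import numpy as np

def unrolled_iteration(x, x0, mask, G, Wq, Wk, Wv, step=0.5):
    """One toy unrolled step combining a global and a local prior.

    x    : current k-space estimate, real-valued (n, d) for simplicity
    x0   : acquired k-space with zeros at missing entries, (n, d)
    mask : boolean (n, d), True where data was acquired
    G    : (n, n) stand-in for the SPIRiT self-consistency operator (G x ~ x)
    Wq, Wk, Wv : (d, d) projection weights for a single attention head
    """
    # --- global predictability: self-attention across k-space rows ---
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    a = np.exp(q @ k.T / np.sqrt(k.shape[1]))
    a /= a.sum(axis=1, keepdims=True)          # row-wise softmax
    x = x + a @ v                              # residual attention update

    # --- local predictability: gradient step on 0.5 * ||G x - x||^2 ---
    r = G @ x - x
    x = x - step * (G.T @ r - r)

    # --- data consistency: re-insert the acquired samples ---
    x[mask] = x0[mask]
    return x
```

In an unfolded network, this iteration would be repeated for a fixed number of stages with `G` and the attention weights learned per stage; the hard data-consistency step at the end of each stage keeps the acquired samples untouched.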
Results: Our method outperforms both the SPIRiT model, which relies on local predictability priors, and the $$$k$$$-space convolutional neural network model.
Impact: Drawing upon global and local predictability priors in $$$k$$$-space, we introduce, for the first time, a white-box Transformer for $$$k$$$-space interpolation. Our method offers better interpretability and lower computational complexity than conventional Transformers, making it a promising approach.