Abstract #4627

Predictability Prior Driven White-Box Transformer for $$$k$$$-Space Interpolation

Chen Luo1, Taofeng Xie2, Huayu Wang3, Congcong Liu3, Liming Tang4, Jianping Zhang5, Guoqing Chen1, Qiyu Jin1, Zhuo-Xu Cui3, and Dong Liang3
1School of Mathematical Sciences, Inner Mongolia University, Hohhot, China, 2College of Computer and Information, Inner Mongolia Medical University, Hohhot, China, 3Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China, 4School of Mathematics and Statistics, Hubei Minzu University, Enshi, China, 5School of Mathematics and Computational Science, Xiangtan University, Xiangtan, China

Synopsis

Keywords: Sparse & Low-Rank Models, AI/ML Image Reconstruction, MRI reconstruction, k-space interpolation, structural low-rankness, Transformer network, interpretability.

Motivation: Existing $$$k$$$-space interpolation methods solely rely on local predictability while neglecting the dependency between missing data and the global $$$k$$$-space.

Goal(s): We seek to construct a method that simultaneously exploits local and global predictability priors in $$$k$$$-space to achieve accurate interpolation of missing $$$k$$$-space data.

Approach: We leverage globally predictable relationships in $$$k$$$-space to guide the development of an interpretable $$$k$$$-space Transformer unfolding model. Furthermore, this model incorporates the self-consistency prior of SPIRiT to characterize local predictability (a minimal illustrative sketch follows this synopsis).

Results: Our method outperforms both the SPIRiT model, which relies on a local predictability prior alone, and a $$$k$$$-space convolutional neural network model.

Impact: Drawing upon global and local predictability priors in $$$k$$$-space, we introduce, for the first time, a white-box Transformer for $$$k$$$-space interpolation. Our method exhibits enhanced interpretability and lower computational complexity compared to conventional Transformers, thereby presenting promising prospects.
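
To make the Approach concrete, the sketch below is our own minimal illustration, not the authors' released code: one unrolled stage in which a SPIRiT-like coil-wise convolution models local predictability, row-wise self-attention models global predictability across $$$k$$$-space, and hard data consistency re-imposes the acquired samples. The class name UnrolledKSpaceIteration, the row-token layout, and all sizes are assumptions for demonstration.

# Minimal sketch (illustrative assumption, not the authors' architecture) of one
# unrolled k-space interpolation stage: local convolution + global attention + DC.
import torch
import torch.nn as nn

class UnrolledKSpaceIteration(nn.Module):
    """One hypothetical unrolled stage for multi-coil k-space interpolation.

    k-space is stored as a real tensor of shape (B, 2*C, H, W), with the real
    and imaginary parts of C coils stacked along the channel axis.
    """
    def __init__(self, coils: int, width: int, heads: int = 8):
        super().__init__()
        ch = 2 * coils
        # Local predictability: a small convolution across coil channels,
        # playing the role of a SPIRiT-like self-consistency operator.
        self.local = nn.Conv2d(ch, ch, kernel_size=5, padding=2, bias=False)
        # Global predictability: self-attention over k-space rows, so each
        # row can borrow information from every other row.
        self.attn = nn.MultiheadAttention(ch * width, heads, batch_first=True)

    def forward(self, x, x0, mask):
        # Local SPIRiT-like update (residual form).
        x = x + self.local(x)
        # Global attention: treat each of the H rows as one token.
        b, ch, h, w = x.shape
        tokens = x.permute(0, 2, 1, 3).reshape(b, h, ch * w)
        tokens, _ = self.attn(tokens, tokens, tokens)
        x = x + tokens.reshape(b, h, ch, w).permute(0, 2, 1, 3)
        # Hard data consistency: acquired samples are kept unchanged.
        return mask * x0 + (1.0 - mask) * x

# Toy usage with random data (4 coils, 32x32 k-space, ~30% sampled).
coils, h, w = 4, 32, 32
x0 = torch.randn(1, 2 * coils, h, w)
mask = (torch.rand(1, 1, h, w) < 0.3).float()
stage = UnrolledKSpaceIteration(coils, w)
out = stage(mask * x0, mask * x0, mask)
print(out.shape)  # torch.Size([1, 8, 32, 32])

In practice such stages would be cascaded with shared or per-stage weights and trained end to end; the data-consistency step is what makes the unrolled model "white-box", since every operation corresponds to a term of the underlying optimization.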

