Keywords: Image Reconstruction, Diffusion Reconstruction
Motivation: The relationship between optimization theory and Transformer architecture design lacks a sufficient theoretical foundation.
Goal(s): To improve MRI reconstruction by integrating a Transformer with a nonlinear diffusion model, achieving a white-box deep learning method.
Approach: Leveraging the ability of fractional Laplacian operators to capture global image information, we constructed a Transformer-like architecture as a fractional-order nonlinear diffusion model that serves as a regularization term (a minimal illustrative sketch follows the abstract).
Results: Because the learnable parameters of the Transformer-like architecture are the weight coefficients of the fractional Laplacian operator, the method is strongly interpretable. Numerical experiments demonstrate that the proposed method achieves superior reconstruction performance.
Impact: We constructed a Transformer-like architecture from fractional Laplacian operators to establish an MRI reconstruction model. Because its parameters can be interpreted as coefficients of the fractional Laplacian operator, the model is a fully interpretable deep learning reconstruction.
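
For readers unfamiliar with the spectral form of the fractional Laplacian, the following Python/PyTorch code is a minimal sketch, not the authors' implementation: it applies (-Δ)^(α/2) in the Fourier domain as multiplication by |ξ|^α and takes one explicit diffusion step with learnable weight coefficients, mirroring the idea that the network's parameters are weights of fractional Laplacian operators. The class name FractionalDiffusionStep, the choice of orders, and the explicit update scheme are illustrative assumptions; the paper's nonlinear diffusivity and full Transformer-like structure are omitted.

import torch
import torch.nn as nn

class FractionalDiffusionStep(nn.Module):
    """One explicit diffusion step built from spectral fractional Laplacians.

    Hypothetical sketch: the learnable `weights` play the role of the
    fractional-Laplacian weight coefficients described in the abstract.
    """

    def __init__(self, orders=(0.5, 1.0, 1.5), step=0.1):
        super().__init__()
        self.orders = orders                    # fractional orders alpha_k
        self.step = step                        # diffusion time step tau
        # learnable weight coefficients w_k, one per fractional order
        self.weights = nn.Parameter(torch.ones(len(orders)) / len(orders))

    def forward(self, u):
        # u: (B, H, W) real-valued image batch
        B, H, W = u.shape
        fy = torch.fft.fftfreq(H, device=u.device)
        fx = torch.fft.fftfreq(W, device=u.device)
        # |xi|^2 on the frequency grid (2*pi scaling absorbed into weights)
        xi2 = fy[:, None] ** 2 + fx[None, :] ** 2
        U = torch.fft.fft2(u)
        # weighted sum of (-Delta)^(alpha_k/2) u, computed spectrally
        # as |xi|^alpha_k * U, with |xi|^alpha = (|xi|^2)^(alpha/2)
        lap = torch.zeros_like(U)
        for w, a in zip(self.weights, self.orders):
            lap = lap + w * xi2 ** (a / 2) * U
        # explicit update: u <- u - tau * sum_k w_k (-Delta)^(alpha_k/2) u
        return u - self.step * torch.fft.ifft2(lap).real

# Usage: one regularization/diffusion step on a random 128x128 image
u = torch.randn(1, 128, 128)
v = FractionalDiffusionStep()(u)

Because the operator is diagonal in the Fourier basis, each step mixes all pixels at once, which is the global-information property the abstract attributes to fractional Laplacians.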