Keywords: Image Reconstruction, Machine Learning/Artificial Intelligence
Motivation: GRAPPA and RAKI optimize purely for data consistency, with no physics-driven or model-based loss terms.
Goal(s): Recurrently feed noise amplification information into k-space interpolation networks by penalizing the online-computed g-factor.
Approach: JAX-implemented GRAPPA and RAKI g-factors were estimated online in each training iteration and incorporated into the optimization as an inherent network noise amplification penalty.
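The approach described above can be sketched as a combined training objective in JAX. The function names, the linear interpolation model, and the simplified noise-amplification proxy below are hypothetical illustrations, not the authors' implementation; in particular, the true g-factor depends on coil sensitivities and the full reconstruction operator, whereas here it is approximated by the squared norm of the interpolation weights (which yields the Tikhonov-like behavior noted in the Results):

```python
import jax
import jax.numpy as jnp

def data_consistency_loss(weights, acquired_k, target_k):
    # Hypothetical linear k-space interpolation: missing samples are
    # predicted as a weighted combination of acquired neighbors.
    predicted = acquired_k @ weights
    return jnp.mean((predicted - target_k) ** 2)

def g_factor_penalty(weights):
    # Simplified noise-amplification proxy: for a linear kernel, noise
    # amplification scales with the size of the interpolation
    # coefficients, so the squared weight norm serves as a
    # Tikhonov-like stand-in for the g-factor.
    return jnp.sum(weights ** 2)

def total_loss(weights, acquired_k, target_k, lam=0.01):
    # Combined objective: data consistency plus an online-computed
    # noise-amplification penalty, recomputed at every iteration.
    return data_consistency_loss(weights, acquired_k, target_k) \
        + lam * g_factor_penalty(weights)

# Toy example with random data standing in for k-space samples.
key = jax.random.PRNGKey(0)
ka, kt, kw = jax.random.split(key, 3)
acquired = jax.random.normal(ka, (32, 8))   # acquired neighbor samples
target = jax.random.normal(kt, (32, 4))     # samples to interpolate
w = 0.1 * jax.random.normal(kw, (8, 4))     # interpolation weights

# One gradient step on the combined objective.
grads = jax.grad(total_loss)(w, acquired, target)
w_new = w - 0.05 * grads
```

In a full training loop, `total_loss` would be minimized over many iterations, with the penalty term discouraging weight configurations that amplify noise even when they fit the acquired data well.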
Results: Networks including the g-factor loss outperformed implementations optimizing only for the data consistency term. Inclusion of g-factor loss terms produced Tikhonov regularization-like effects on the image noise distribution, as revealed by difference maps against the fully sampled gold standard.
Impact: Incorporating a penalty on inherent noise amplification into k-space interpolation networks reduces reconstruction noise levels compared with implementations that optimize only for data consistency. G-factor-informed reconstructions show Tikhonov regularization-like effects, as revealed by the noise distribution on difference maps.