Keywords: AI/ML Image Reconstruction, Machine Learning/Artificial Intelligence
Motivation: The high memory demand of model-based deep learning (MoDL) algorithms restricts their application in large-scale (e.g., 3D/4D) settings. Moreover, their robustness to input perturbations is not well studied.
Goal(s): To realize a memory-efficient MoDL framework with theoretical guarantees similar to those of compressed sensing methods, while offering state-of-the-art performance.
Approach: We introduce a memory-efficient deep equilibrium framework with theoretical guarantees on uniqueness, convergence, and robustness (an illustrative fixed-point sketch follows this summary).
Results: The proposed scheme offers performance comparable to state-of-the-art methods while being 10 times more memory-efficient. Additionally, the proposed scheme is significantly more robust to Gaussian and adversarial input perturbations.
Impact: The proposed approach reduces memory demand by more than 10x, enabling the use of MoDL algorithms in large-scale (3D/4D) settings. The theoretically guaranteed robustness of the proposed algorithm reduces error amplification in highly under-sampled settings.
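For readers unfamiliar with deep equilibrium models, the following is a minimal, illustrative sketch (not the authors' implementation) of such a layer in PyTorch: the forward pass iterates a learned map f(z, x) to a fixed point without storing intermediate activations, and the backward pass uses implicit differentiation at the fixed point, so memory cost does not grow with the number of iterations. All names here (DEQFixedPoint, fixed_point_solve, the toy map f) are hypothetical, and the naive Picard solver stands in for the accelerated fixed-point solvers typically used in practice.

import torch
import torch.nn as nn
from torch import autograd


def fixed_point_solve(g, z0, max_iter=50, tol=1e-4):
    # Naive Picard iteration for z = g(z); a stand-in for Anderson/Broyden solvers.
    z = z0
    for _ in range(max_iter):
        z_next = g(z)
        if torch.norm(z_next - z) < tol * (torch.norm(z) + 1e-8):
            return z_next
        z = z_next
    return z


class DEQFixedPoint(nn.Module):
    # Hypothetical deep-equilibrium layer wrapping a learned map f(z, x).
    def __init__(self, f, max_iter=50, tol=1e-4):
        super().__init__()
        self.f, self.max_iter, self.tol = f, max_iter, tol

    def forward(self, x):
        # Forward: find the fixed point with autograd disabled -> O(1) activation memory.
        with torch.no_grad():
            z_star = fixed_point_solve(lambda z: self.f(z, x),
                                       torch.zeros_like(x),
                                       self.max_iter, self.tol)
        # One differentiable application re-attaches the fixed point to the graph.
        z_star = self.f(z_star, x)

        if torch.is_grad_enabled():
            # Backward: solve the adjoint fixed-point equation g = (df/dz)^T g + grad,
            # i.e. implicit differentiation instead of backprop through all iterations.
            z0 = z_star.detach().requires_grad_()
            f0 = self.f(z0, x)

            def backward_hook(grad):
                return fixed_point_solve(
                    lambda g: autograd.grad(f0, z0, g, retain_graph=True)[0] + grad,
                    grad, self.max_iter, self.tol)

            z_star.register_hook(backward_hook)
        return z_star


if __name__ == "__main__":
    # Toy contractive map standing in for a learned MoDL-style iteration.
    lin = nn.Linear(8, 8)
    f = lambda z, x: 0.5 * torch.tanh(lin(z)) + x
    deq = DEQFixedPoint(f)
    x = torch.randn(4, 8, requires_grad=True)
    loss = deq(x).pow(2).sum()
    loss.backward()  # constant-memory backward via the hook above
    print(lin.weight.grad.norm())

By contrast, an unrolled MoDL network must store activations for every iteration during backpropagation; avoiding that growth in memory is what enables the large-scale (3D/4D) applications described above.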