Abstract #1953

Scalable and Interpretable Neural MRI Reconstruction via Layer-Wise Training

Batu Ozturkler1, Arda Sahiner1, Mert Pilanci1, Shreyas Vasanawala2, John Pauly1, and Morteza Mardani1
1Electrical Engineering, Stanford University, Stanford, CA, United States, 2Radiology, Stanford University, Stanford, CA, United States

Deep learning-based reconstruction methods have shown great promise for undersampled MR reconstruction. However, their lack of interpretability and their nonconvex nature impede their utility, as training may converge to undesirable local minima. Moreover, training deep networks for high-dimensional imaging applications such as DCE and 4D flow requires large amounts of memory that can overload GPUs. Here, we advocate a layer-wise training method that is amenable to convex optimization and scales to training on 3D/4D datasets. We compare convex layer-wise training with traditional end-to-end training. The proposed method matches the reconstruction quality of end-to-end training while being interpretable, convex, and less memory-demanding.
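The greedy layer-wise idea can be illustrated in a toy setting: each layer is fit while all earlier layers stay frozen, so each subproblem is a convex least-squares fit rather than a nonconvex end-to-end optimization. The sketch below is a minimal numpy stand-in under assumed toy dimensions; the operator `A`, the training signals `X`, and the per-layer linear least-squares fit are illustrative, not the authors' exact convex formulation for ReLU-layer training.

```python
import numpy as np

# Toy sketch of greedy layer-wise training (hypothetical setup, not the
# authors' exact method): each layer is trained by a convex least-squares
# problem while all earlier layers stay frozen.
rng = np.random.default_rng(0)
n, m, n_train = 16, 8, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)   # undersampling forward operator (stand-in)
X = rng.standard_normal((n, n_train))          # "ground-truth" training signals (stand-in)
Y = A @ X                                      # undersampled measurements

F = A.T @ Y                                    # zero-filled initial reconstruction
mse_zero_fill = np.mean((F - X) ** 2)

losses = []
for _ in range(3):                             # three unrolled "layers"
    # Convex subproblem: least-squares fit of one linear layer, inputs frozen.
    W = np.linalg.lstsq(F.T, X.T, rcond=None)[0].T
    losses.append(np.mean((W @ F - X) ** 2))
    F = np.maximum(W @ F, 0.0)                 # ReLU features fed to the next layer

# The first trained layer can only improve on the zero-filled estimate,
# because W = I is a feasible point of its convex least-squares fit.
assert losses[0] <= mse_zero_fill + 1e-9
```

Because each subproblem is convex, every layer's fit reaches its global optimum, which is the interpretability and reliability argument made in the abstract; memory savings come from only one layer being optimized at a time.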

