Abstract #2798

Attention-based two-stage network for non-Cartesian multi-coil ASL MRI reconstruction

Yanchen Guo¹, Shichun Chen¹, Zhao Li², Manuel Taso³, David C. Alsop³, and Weiying Dai¹
¹Computer Science, State University of New York at Binghamton, Vestal, NY, United States; ²Zhejiang University, Zhejiang, China; ³Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA, United States

Synopsis

Keywords: Machine Learning/Artificial Intelligence

Motivation: High-resolution arterial spin labeling (ASL) imaging is time-consuming, limiting its clinical applications in studying small brain structures.

Goal(s): To reconstruct high-resolution ASL images from an 8-fold accelerated acquisition with under-sampled non-Cartesian k-space sampling.

Approach: We proposed an attention-based, two-stage deep learning (DL) model (an illustrative sketch follows the synopsis below).

Results: The proposed DL model successfully reconstructs images from 8-fold under-sampled, non-Cartesian, multi-coil k-space data.

Impact: Our proposed attention-based deep learning model can reconstruct under-sampled non-Cartesian multi-coil k-space data, thereby significantly decreasing the long acquisition time required for high-resolution ASL MRI, which may enable clinical applications in studying small brain structures.
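
Illustrative sketch: the abstract does not describe the network architecture itself. Purely as an illustration of the kind of attention-based image refinement mentioned in the Approach, the PyTorch sketch below applies self-attention over image patches to refine an initial (e.g., gridded or zero-filled) coil-combined reconstruction. All layer choices, patch and embedding sizes, and the residual structure are assumptions made for this example; this is not the authors' two-stage model.

# Hypothetical sketch only: a generic attention-based refinement block in PyTorch.
# The abstract does not specify the network; all shapes and layers below are
# illustrative assumptions, not the authors' architecture.
import torch
import torch.nn as nn


class AttentionRefiner(nn.Module):
    """Self-attention over image patches, followed by a small transposed-conv decoder."""

    def __init__(self, patch: int = 8, channels: int = 1, dim: int = 64, heads: int = 4):
        super().__init__()
        self.embed = nn.Conv2d(channels, dim, kernel_size=patch, stride=patch)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.decode = nn.ConvTranspose2d(dim, channels, kernel_size=patch, stride=patch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W) coil-combined initial reconstruction
        tokens = self.embed(x)                   # (B, dim, H/p, W/p) patch embeddings
        b, d, h, w = tokens.shape
        seq = tokens.flatten(2).transpose(1, 2)  # (B, h*w, dim) token sequence
        attn_out, _ = self.attn(seq, seq, seq)   # self-attention across patches
        seq = self.norm(seq + attn_out)          # residual connection + layer norm
        tokens = seq.transpose(1, 2).reshape(b, d, h, w)
        return x + self.decode(tokens)           # residual refinement of the input image


if __name__ == "__main__":
    # Toy usage: refine a stage-one (e.g., gridded) reconstruction of size 64x64.
    initial_recon = torch.randn(2, 1, 64, 64)
    refined = AttentionRefiner()(initial_recon)
    print(refined.shape)  # torch.Size([2, 1, 64, 64])

The residual formulation (output = input + learned correction) is a common choice for reconstruction refinement networks, since the initial gridded image already carries most of the low-frequency content.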
