Abstract #3807

Model-based deep learning reconstruction by SmartSpeed AI for head and neck contrast-enhanced 3D-T1 weighted imaging

Noriyuki Fujima1, Junichi Nakagawa1, Jihun Kwon2, Masami Yoneyama2, and Kohsuke Kudo3
1Hokkaido University Hospital, Sapporo, Japan, 2Philips Japan Ltd, Tokyo, Japan, 3Faculty of Medicine, Graduate School of Medicine, Hokkaido University, Sapporo, Japan

Synopsis

Keywords: Head & Neck/ENT

Motivation: Fat-suppressed (Fs) contrast-enhanced (CE) three-dimensional (3D) T1-weighted imaging (T1WI) enables clear visualization of head and neck structures; however, it requires a long scanning time to obtain high-quality images.

Goal(s): To demonstrate the utility of model-based deep learning (DL) reconstruction, named SmartSpeed AI, for the acquisition of Fs-CE-3D T1WI of the head and neck.

Approach: Three reconstruction techniques were compared for head and neck Fs-CE-3D T1WI: 1) conventional compressed-sensing sensitivity-encoding (CS), 2) CS followed by end-to-end DL reconstruction, and 3) SmartSpeed AI.
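
As a rough illustration of what distinguishes a model-based (unrolled) DL reconstruction such as SmartSpeed AI from the end-to-end DL arm of this comparison, the sketch below alternates a data-consistency gradient step, using coil sensitivities and the undersampling mask, with a small learned denoiser. This is a minimal, hypothetical sketch, not the vendor implementation; the network architecture, iteration count, and step sizes are assumptions chosen for clarity.

```python
# Illustrative sketch only (not the SmartSpeed AI implementation): a generic
# unrolled, model-based DL reconstruction that alternates a data-consistency
# gradient step (using coil sensitivities and the undersampling mask) with a
# small learned denoiser. Architecture, iteration count, and step sizes are
# assumptions chosen for clarity.
import torch
import torch.nn as nn


def forward_op(image, smaps, mask):
    """A(x): coil-weighted FFT of the image, sampled on the k-space mask."""
    coil_images = smaps * image.unsqueeze(1)            # (B, C, H, W), complex
    return torch.fft.fft2(coil_images, norm="ortho") * mask


def adjoint_op(kspace, smaps, mask):
    """A^H(y): inverse FFT of masked k-space, coil-combined with conjugate maps."""
    coil_images = torch.fft.ifft2(kspace * mask, norm="ortho")
    return (smaps.conj() * coil_images).sum(dim=1)      # (B, H, W), complex


class Denoiser(nn.Module):
    """Tiny CNN regularizer operating on 2-channel (real/imaginary) images."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 2, 3, padding=1),
        )

    def forward(self, x_complex):
        x = torch.stack([x_complex.real, x_complex.imag], dim=1)
        out = self.net(x)
        return torch.complex(out[:, 0], out[:, 1])


class UnrolledRecon(nn.Module):
    """Unrolled gradient descent with a learned prior (model-based DL recon)."""
    def __init__(self, n_iters=6):
        super().__init__()
        self.denoisers = nn.ModuleList([Denoiser() for _ in range(n_iters)])
        self.step = nn.Parameter(torch.full((n_iters,), 0.5))

    def forward(self, kspace, smaps, mask):
        x = adjoint_op(kspace, smaps, mask)              # zero-filled starting image
        for mu, denoiser in zip(self.step, self.denoisers):
            residual = forward_op(x, smaps, mask) - kspace
            x = x - mu * adjoint_op(residual, smaps, mask) - denoiser(x)
        return x.abs()                                   # magnitude image
```

In this framing, the end-to-end DL arm of the comparison corresponds roughly to a single denoising network applied once to the CS-reconstructed image, whereas the unrolled (model-based) scheme re-enforces consistency with the acquired k-space at every iteration.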

Results: SmartSpeed AI provided superior image quality compared with the other two reconstruction techniques.

Impact: SmartSpeed AI, a model-based deep learning reconstruction technique, demonstrated improved image quality in head and neck Fs-CE-3D T1WI, even with a high reduction factor of 12, compared to conventional CS and CS followed by end-to-end deep learning reconstruction.

