Abstract #4866

Visualizing the “ideal” input MRI for synthetic CT generation with a trained deep convolutional neural network: Can we improve the inputs for deep learning models?

Andrew P. Leynes1,2 and Peder E.Z. Larson1,2

1University of California San Francisco, San Francisco, CA, United States, 2UC Berkeley - UC San Francisco Joint Graduate Program in Bioengineering, Berkeley and San Francisco, CA, United States

Deep learning has found wide application in medical image reconstruction, transformation, and analysis tasks. Unlike typical machine learning workflows, MRI researchers are able to change the characteristics of the images used as inputs to deep learning models. We propose an algorithm that visualizes the “ideal” input images, i.e., those that would yield the least error for a trained deep neural network. We apply this visualization technique to a deep convolutional neural network that converts Dixon MRI to synthetic CT images. We briefly characterize the optimization behavior and qualitatively analyze the features of the “ideal” input image.
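As a rough illustration of the kind of input optimization the abstract describes, the sketch below freezes a trained MRI-to-CT network and performs gradient descent on the input image itself so that the synthetic CT error against a reference CT decreases. The PyTorch framing, the L1 loss, the Adam optimizer, and all names and hyperparameters are assumptions for illustration only, not details taken from the abstract.

    import torch

    def optimize_input(model, mri_init, ct_reference, steps=200, lr=1e-2):
        """Gradient-descend on the input MRI (network weights frozen) to
        reduce the synthetic-CT error against a reference CT (illustrative sketch)."""
        model.eval()
        for p in model.parameters():
            p.requires_grad_(False)          # freeze the trained network

        mri = mri_init.clone().detach().requires_grad_(True)
        optimizer = torch.optim.Adam([mri], lr=lr)  # optimize the input, not the weights

        for _ in range(steps):
            optimizer.zero_grad()
            sct = model(mri)                 # forward pass: MRI -> synthetic CT
            loss = torch.nn.functional.l1_loss(sct, ct_reference)
            loss.backward()                  # gradients with respect to the input image
            optimizer.step()

        return mri.detach()                  # candidate "ideal" input image

The key design choice in such an approach is that the loss is backpropagated all the way to the input tensor while the network parameters stay fixed, so the result reveals what input the trained model would "prefer" rather than changing the model itself.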

