Abstract #4194

Knowledge Distillation Enables Efficient Neural Network for Better Generalizability in MR Image Denoising and Super Resolution

Qinyang Shou1, Zechen Zhou2, Kevin Blansit2, Praveen Gulaka2, Enhao Gong2, Greg Zaharchuk2, and Ajit Shankaranarayanan2
1University of Southern California, Los Angeles, CA, United States, 2Subtle Medical Inc., Menlo Park, CA, United States

Synopsis

Keywords: Machine Learning/Artificial Intelligence, Knowledge Distillation

In this work, knowledge distillation (KD) is investigated as a way to improve model generalizability in image enhancement tasks. KD allows a 35× faster convolutional network to achieve performance similar to a Transformer-based model on image denoising. In addition, KD enables a single image enhancement model for both denoising and super-resolution that outperforms a conventional multi-task model trained on mixed data. KD thus potentially allows efficient image enhancement models to achieve the better generalization performance needed for clinical translation.
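The abstract does not include implementation details, so the following is only a minimal sketch of the general distillation setup it describes, written in PyTorch style. The StudentCNN architecture, the MSE-based loss terms, and the weighting parameter alpha are illustrative assumptions, not the authors' method; the frozen teacher stands in for the Transformer-based model.

```python
import torch
import torch.nn as nn

class StudentCNN(nn.Module):
    """Hypothetical lightweight convolutional denoiser (the efficient student)."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        # Residual prediction: estimate the noise component and subtract it.
        return x - self.net(x)

def distillation_step(student, teacher, noisy, clean, optimizer, alpha=0.5):
    """One KD training step: match both the ground truth and the teacher's output.

    `alpha` (assumed here) trades off the supervised term against the
    distillation term; the abstract does not specify the actual weighting.
    """
    teacher.eval()
    with torch.no_grad():
        teacher_out = teacher(noisy)  # soft target from the frozen teacher
    student_out = student(noisy)
    loss_gt = nn.functional.mse_loss(student_out, clean)        # supervised term
    loss_kd = nn.functional.mse_loss(student_out, teacher_out)  # distillation term
    loss = (1 - alpha) * loss_gt + alpha * loss_kd
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Keeping the teacher frozen and blending a ground-truth loss with a teacher-matching loss is the standard KD recipe for regression-style image tasks; at inference time only the small student runs, which is where the speedup over the Transformer teacher would come from.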

