Abstract #0994

High-Fidelity Reconstruction with Instance-wise Discriminative Feature Matching Loss

Ke Wang1, Jonathan I. Tamir1,2, Stella X. Yu1,3, and Michael Lustig1
1Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, United States, 2Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, United States, 3International Computer Science Institute, University of California, Berkeley, Berkeley, CA, United States

Machine-learning-based reconstructions have shown great potential to reduce scan time while maintaining high image quality. However, the per-pixel losses commonly used for training do not capture perceptual differences between the reconstructed and ground-truth images, leading to blurring or loss of texture. We therefore incorporate a novel feature-representation-based loss function, which we call Unsupervised Feature Loss (UFLoss), into existing reconstruction pipelines (e.g., MoDL). In-vivo results on both 2D and 3D reconstructions show that adding UFLoss encourages more realistic reconstructed images with substantially more detail than conventional methods (MoDL and compressed sensing).
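The idea of augmenting a per-pixel loss with a feature-space term can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature encoder `feat_fn` here is a toy fixed random projection standing in for the pretrained unsupervised feature network, and the weighting `lam` is a hypothetical hyperparameter.

```python
import numpy as np

def per_pixel_loss(recon, target):
    # Standard pixel-wise L2 loss.
    return np.mean((recon - target) ** 2)

def feature_loss(recon, target, feat_fn):
    # Distance measured in a feature space; feat_fn stands in for a
    # learned feature encoder (hypothetical placeholder here).
    return np.mean((feat_fn(recon) - feat_fn(target)) ** 2)

def combined_loss(recon, target, feat_fn, lam=0.1):
    # Total loss = per-pixel term + lam * feature-space term.
    return per_pixel_loss(recon, target) + lam * feature_loss(recon, target, feat_fn)

# Toy stand-in feature map: a fixed random linear projection of the image.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 256))
feat = lambda x: W @ x.reshape(-1)

target = rng.standard_normal((16, 16))
recon = target + 0.01 * rng.standard_normal((16, 16))
loss = combined_loss(recon, target, feat)
```

In practice the feature encoder would be trained on unlabeled image patches, and the feature term penalizes perceptual mismatches that the per-pixel term misses.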

