Abstract #4773

Exploring the Hallucination Risk of Deep Generative Models in MR Image Recovery

Vineet Edupuganti1, Morteza Mardani1, Joseph Cheng1, Shreyas Vasanawala2, and John Pauly1

1Electrical Engineering, Stanford University, Stanford, CA, United States, 2Radiology, Stanford University, Stanford, CA, United States

The hallucination of realistic-looking artifacts is a serious concern when reconstructing highly undersampled MR images. In this study, we train a variational autoencoder-based generative adversarial network (VAE-GAN) on a dataset of knee images and conduct a detailed exploration of the model's latent space by generating a large set of admissible reconstructions. Our preliminary results indicate that factors such as the sampling rate and trajectory, as well as the loss function, affect the risk of hallucination; with a reasonable choice of these parameters, however, deep learning schemes appear robust in recovering medical images.
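The latent-space exploration described above can be illustrated with a minimal sketch: draw many latent samples consistent with one measurement, decode each into a candidate reconstruction, and use the pixel-wise spread across candidates as a proxy for hallucination risk. The decoder, latent dimensions, and posterior parameters below are all toy stand-ins (a fixed random linear map in place of the trained VAE-GAN generator), not the authors' actual model.

```python
import numpy as np

# Toy stand-in for a trained VAE-GAN decoder: a fixed random linear map
# from latent space to image space (assumption for illustration only).
rng = np.random.default_rng(0)
latent_dim, img_size = 8, 16
W = rng.standard_normal((img_size * img_size, latent_dim))

def decode(z):
    """Map a latent vector to a (toy) reconstructed image."""
    return (W @ z).reshape(img_size, img_size)

# Hypothetical posterior parameters inferred from one undersampled
# measurement (assumed values).
mu = rng.standard_normal(latent_dim)
sigma = 0.1 * np.ones(latent_dim)

# Draw many admissible reconstructions by sampling the latent posterior.
samples = np.stack([
    decode(mu + sigma * rng.standard_normal(latent_dim))
    for _ in range(100)
])

mean_img = samples.mean(axis=0)    # consensus reconstruction
spread = samples.std(axis=0)       # pixel-wise spread: hallucination-risk proxy

print(mean_img.shape, spread.shape)
```

Regions where `spread` is large are pixels the generative model fills in differently across equally plausible latent codes, i.e. the locations where hallucinated detail is most likely.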
