Abstract #2849

Learning MR-guided PET reconstruction in image space using a convolutional neural network - application to multiple PET tracers

Georg Schramm1, David Rigie2, Thomas Vahle3, Ahmadreza Rezaei1, Koen van Laere1, Timothy Shepherd4, Johan Nuyts1, and Fernando Boada2
1Department of Imaging and Pathology, Division of Nuclear Medicine, KU Leuven, Leuven, Belgium, 2Center for Advanced Imaging Innovation and Research (CAI2R) and Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York City, NY, United States, 3Siemens Healthcare GmbH, Erlangen, Germany, 4Department of Neuroradiology, NYU Langone Health, Department of Radiology, New York University School of Medicine, New York City, NY, United States

Recently, we showed that MR-guided PET reconstruction can be mimicked in image space using a convolutional neural network (CNN), which facilitates the translation of MR-guided PET reconstructions into clinical routine. In this work, we test the robustness of our CNN with respect to the input PET tracer. We show that training the CNN on PET images from two different tracers ([18F]FDG and [18F]PE2I) yields a CNN that also performs very well on a third tracer ([18F]FET), whereas a network trained on images from a single tracer did not.
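The core idea of mimicking MR-guided reconstruction in image space is to feed a co-registered PET/MR image pair into a CNN as a multi-channel input. The sketch below illustrates only this input/forward-pass structure with a single untrained convolutional layer in plain NumPy; it is not the authors' network, and the image data, kernel count, and kernel size are arbitrary placeholders.

```python
import numpy as np

def conv2d(x, w):
    """Valid 2D convolution of a multi-channel image x (C, H, W)
    with kernels w (K, C, kh, kw) -> feature maps (K, H-kh+1, W-kw+1)."""
    K, C, kh, kw = w.shape
    _, H, W = x.shape
    out = np.zeros((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = np.sum(x[:, i:i + kh, j:j + kw] * w[k])
    return out

rng = np.random.default_rng(0)
pet = rng.random((32, 32))  # placeholder for a conventional PET reconstruction
mr = rng.random((32, 32))   # placeholder for the co-registered MR image
x = np.stack([pet, mr])     # two-channel image-space input, shape (2, 32, 32)

# 8 hypothetical 3x3 kernels; a trained CNN would learn these from
# pairs of conventional and MR-guided reconstructions.
w = 0.1 * rng.standard_normal((8, 2, 3, 3))
features = np.maximum(conv2d(x, w), 0.0)  # convolution + ReLU
print(features.shape)  # (8, 30, 30)
```

A full network would stack several such layers and map the features back to a single output channel approximating the MR-guided PET reconstruction; training on multiple tracers then amounts to mixing image pairs from the different tracers in the training set.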
