Learning motion correction from YouTube for real-time MRI reconstruction with AUTOMAP
David E J Waddington1, Christopher Chiu1, Nicholas Hindley1,2, Neha Koonjoo2, Tess Reynolds1, Paul Liu1, Bo Zhu2, Chiara Paganelli3, Matthew S Rosen2,4,5, and Paul J Keall1
1ACRF Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia, 2A. A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States, 3Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy, 4Department of Physics, Harvard University, Cambridge, MA, United States, 5Harvard Medical School, Boston, MA, United States
Today’s MRI lacks the spatio-temporal resolution to image a patient’s anatomy in real time. Novel solutions are therefore required in MRI-guided radiotherapy to enable real-time adaptation of the treatment beam, optimally targeting the cancer while sparing surrounding healthy tissue. Neural networks could solve this problem; however, there is a dearth of the sufficiently large training datasets required to accurately model patient motion. Here, we use the YouTube-8M database to train the AUTOMAP network. Using a virtual dynamic lung tumour phantom, we show that the generalized motion properties learned from YouTube lead to improved target tracking accuracy.
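AUTOMAP learns a direct mapping from raw k-space data to the image domain, so natural images (here, YouTube video frames) can supply training pairs by simulating their k-space via a Fourier transform. The sketch below illustrates this idea only; the function name, frame size, and normalisation are assumptions, and the abstract does not specify the actual YouTube-8M preprocessing or network details.

```python
import numpy as np

def make_training_pair(frame):
    """Build one illustrative (k-space, image) training pair from a
    greyscale video frame, in the spirit of AUTOMAP training on
    natural images. Hypothetical sketch; not the authors' pipeline."""
    img = frame.astype(np.float64)
    img = img / img.max()                        # normalise to [0, 1]
    kspace = np.fft.fftshift(np.fft.fft2(img))   # simulated fully-sampled k-space
    # Network input: real/imaginary channels of k-space; target: the image
    x = np.stack([kspace.real, kspace.imag])
    y = img
    return x, y

# Usage: a random 64x64 array stands in for a YouTube video frame
frame = np.random.rand(64, 64)
x, y = make_training_pair(frame)
```

Because the forward model is an exact FFT, the inverse FFT of the simulated k-space recovers the target image, which is what makes such pairs suitable supervision for a learned reconstruction.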