Today’s MRI lacks the spatio-temporal resolution to image a patient’s anatomy in real time. Novel solutions are therefore required in MRI-guided radiotherapy to enable real-time adaptation of the treatment beam, optimally targeting the cancer while sparing surrounding healthy tissue. Neural networks could solve this problem; however, the sufficiently large training datasets required to accurately model patient motion are scarce. Here, we use the YouTube-8M database to train the AUTOMAP network. Using a virtual dynamic lung tumour phantom, we show that the generalized motion properties learned from YouTube lead to improved target-tracking accuracy.