Abstract #1308

Deep learning Assisted Radiological reporT (DART)

Keerthi Sravan Ravi1,2, Sairam Geethanath2, Girish Srinivasan3, Rahul Sharma4, Sachin R Jambawalikar4, Angela Lignelli-Dipple4, and John Thomas Vaughan Jr.2
1Biomedical Engineering, Columbia University, New York, NY, United States, 2Columbia Magnetic Resonance Research Center, Columbia University, New York, NY, United States, 3MediYantri Inc., Palatine, IL, United States, 4Columbia University Irving Medical Center, New York, NY, United States

A 2015 survey ranked radiologist burnout seventh highest among all physician specialties. In this work, two neural networks are designed and trained to generate text-based first-read radiology reports. Existing tools are leveraged to perform registration followed by brain tumour segmentation. Feature vectors are constructed from information extracted from the segmentation masks and fed to the neural networks, which are trained against a radiologist's reports on fifty subjects. The neural networks, combined with image statistics, characterise tumour type, mass effect and oedema, and report tumour volumetry, compiled into a first-read radiology report.
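The abstract does not specify the exact feature set derived from the segmentation masks. As a minimal sketch, assuming simple volumetric features (voxel count, volume in millilitres, centroid and bounding-box extent), a mask-to-feature-vector step of the kind described might look like this; the function name and feature choices are illustrative, not the authors' implementation:

```python
import numpy as np

def tumour_features(mask, voxel_dims_mm=(1.0, 1.0, 1.0)):
    """Build a simple feature dictionary from a binary tumour segmentation mask.

    Hypothetical illustration: voxel count, tumour volume in mL,
    centroid (voxel coordinates) and bounding-box extent are used as
    example features; the abstract does not name the actual features.
    """
    mask = np.asarray(mask, dtype=bool)
    voxel_vol_mm3 = float(np.prod(voxel_dims_mm))
    n_voxels = int(mask.sum())
    volume_ml = n_voxels * voxel_vol_mm3 / 1000.0  # 1 mL = 1000 mm^3
    if n_voxels == 0:
        centroid = (float("nan"),) * 3
        extent = (0, 0, 0)
    else:
        coords = np.argwhere(mask)                      # (N, 3) voxel indices
        centroid = tuple(coords.mean(axis=0).tolist())  # mean voxel position
        extent = tuple((coords.max(axis=0) - coords.min(axis=0) + 1).tolist())
    return {
        "n_voxels": n_voxels,
        "volume_ml": volume_ml,
        "centroid_vox": centroid,
        "bbox_extent_vox": extent,
    }
```

Features of this kind could then be concatenated into the fixed-length vectors that the report-generating networks consume.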

