Manual annotation is a major bottleneck in supervised machine learning. We present a method that leverages natural language processing (NLP) to automatically generate weak labels from radiology reports, and we show how these weak labels can be used for the image-classification task of high-grade glioma diagnostic surveillance. We apply a convolutional neural network (CNN) to classify T2-weighted (T2w) difference maps as indicating either tumor stability or instability. Results suggest that pretraining the CNN on weak labels and fine-tuning it on manually annotated data improves performance, though not statistically significantly, over a baseline pipeline trained only on manually annotated data.
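To make the weak-labeling idea concrete, here is a minimal sketch of a rule-based labeler that maps free-text radiology reports to "stable"/"unstable" labels. The keyword lists and the abstain behavior are illustrative assumptions for this sketch, not the NLP method used in the paper; labels produced this way would serve as the noisy pretraining targets, with manually annotated data reserved for fine-tuning.

```python
# Hypothetical keyword-based weak labeler for radiology reports.
# The phrase lists below are assumptions for illustration only.
UNSTABLE_PHRASES = ("progression", "enlarg", "new enhancement", "increase")
STABLE_PHRASES = ("stable", "no interval change", "unchanged")


def weak_label(report: str):
    """Return 'unstable', 'stable', or None (abstain) for a report."""
    text = report.lower()
    # Check instability cues first: reports mentioning progression
    # often also contain the word "stable" in other sentences.
    if any(p in text for p in UNSTABLE_PHRASES):
        return "unstable"
    if any(p in text for p in STABLE_PHRASES):
        return "stable"
    return None  # abstain: report excluded from weak pretraining set


if __name__ == "__main__":
    print(weak_label("Findings stable, no interval change."))
    print(weak_label("Interval increase in T2 signal abnormality."))
    print(weak_label("Patient tolerated the examination well."))
```

Abstaining on ambiguous reports keeps the weak label set cleaner at the cost of coverage, a common trade-off in weak supervision.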