Towards Fully Automated Interpretation of Volumetric Medical Images

Master's Defense
Speaker Name
Rachel Draelos
Date and Time
-
Location
LSRC D344
Abstract

Computed tomography (CT) is a medical imaging technique used for the diagnosis and management of numerous conditions, including cancer, fractures, and infections. Automated interpretation of CT scans using machine learning holds immense promise: it may accelerate the radiology workflow, bring radiology expertise to underserved areas, and reduce missed diagnoses caused by human error. Unfortunately, several obstacles have thus far prevented large-scale deployment of automated CT interpretation systems: (1) the arduousness of manually acquiring the structured abnormality labels needed to train machine learning models; (2) the difficulty of acquiring and preparing a sufficient number of CT volumes to train models, as CT volumes are siloed away at individual institutions and stored in a raw format incompatible with popular machine learning frameworks; and (3) the question of how best to formulate the subtasks of CT interpretation that comprise a radiologist's work, and how to construct models that solve those tasks. This work presents efforts to address these challenges: (1) a hybrid machine learning and rule-based approach for accurate extraction of structured abnormality labels from free-text radiology reports; (2) a data set of 36,861 CT volumes appropriately prepared for training and evaluation of machine learning models; and (3) an initial formulation of CT interpretation as a multilabel classification task, with results of convolutional neural network models on this task. Proposed future work includes (1) "location and abnormality" models that simultaneously predict abnormality type and location, and (2) trainable soft attention models for weakly supervised abnormality localization.
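
To make the multilabel formulation concrete, the sketch below shows how a small 3D convolutional network could produce one independent prediction per abnormality label, trained with a per-label sigmoid loss. This is a minimal illustration only, assuming PyTorch; the architecture, label count, and input dimensions are placeholders and do not correspond to the models evaluated in this work.

import torch
import torch.nn as nn

class TinyCTClassifier(nn.Module):
    """Toy 3D CNN that emits one logit per abnormality label."""
    def __init__(self, num_labels: int = 80):  # placeholder label count
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling over the whole volume
        )
        self.classifier = nn.Linear(16, num_labels)

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        # volume: (batch, 1, depth, height, width)
        x = self.features(volume).flatten(1)
        return self.classifier(x)  # raw logits, one per label

model = TinyCTClassifier(num_labels=80)
criterion = nn.BCEWithLogitsLoss()  # independent sigmoid per label (multilabel, not multiclass)

# Hypothetical mini-batch: two heavily downsampled CT volumes with binary label vectors.
volumes = torch.randn(2, 1, 32, 64, 64)
labels = torch.randint(0, 2, (2, 80)).float()

loss = criterion(model(volumes), labels)
loss.backward()

The key design point is the loss: because a single scan can exhibit many abnormalities at once, each label is scored independently with a sigmoid rather than competing through a softmax.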

Host
Advisor: Lawrence Carin
Committee: Ronald Parr, Cynthia Rudin, Ricardo Henao Giraldo