Interpretable Representation Learning for Visual Intelligence

Duke Computer Science Colloquium
Speaker Name
Bolei Zhou
Date and Time
-
Location
LSRC D106
Notes
Lunch served at 11:45 am
Abstract

In recent years, progress in computer vision and machine learning has been driven largely by deep neural networks. However, despite the superior performance of these networks, it remains challenging to understand their inner workings and to explain their output predictions. My research has pioneered several approaches for elucidating the interpretable representations that emerge in networks trained to solve various vision tasks. In this talk, I will first show that objects and other meaningful concepts emerge inside networks as a consequence of learning to recognize scenes. Then I will introduce a network dissection approach that automatically identifies meaningful emergent structures and quantifies their interpretability. To further explain the networks' output predictions, I will describe an approach that efficiently identifies the image regions most relevant to a given prediction, shedding light on the networks' decision-making process and on why they succeed or fail. Finally, I will discuss ongoing efforts toward learning efficient and interpretable deep representations for video event understanding, with applications in robotics and medical image analysis.

Short Biography

Bolei Zhou is a doctoral candidate in computer science at the Massachusetts Institute of Technology. His research is in computer vision and machine learning, focusing on visual recognition and interpretable deep learning. He has received the Facebook Fellowship, the Microsoft Research Fellowship, and the MIT Greater China Fellowship, and his research has been featured in media outlets such as TechCrunch, Quartz, and MIT News.

Hosts
Henry Pfister and Cynthia Rudin