Evaluating Robustness of Neural Networks
Link to talk video: https://compsci.capture.duke.edu/Panopto/Pages/Viewer.aspx?id=f8b8abac-e471-4b79-826d-ab9001221efe
The robustness of neural networks to adversarial examples has received great attention due to its security implications. Despite various attack approaches for crafting visually imperceptible adversarial examples, little has been done toward a comprehensive measure of robustness. In this talk, I'll present a series of our works on robustness evaluation and certification, including the first robustness score CLEVER; the efficient certification algorithms Fast-Lin, CROWN, and CNN-Cert; and the probabilistic robustness verification algorithm PROVEN. Our proposed approaches are computationally efficient and provide good-quality robustness estimates/certificates, as demonstrated by extensive experiments on MNIST, CIFAR, and ImageNet.
Tsui-Wei (Lily) Weng is a PhD candidate at MIT EECS under the supervision of Prof. Luca Daniel. Her current research focuses on evaluating and quantifying the robustness of neural networks in machine learning. She is particularly interested in developing fast algorithms as well as theoretical analyses for robustness certification of deep neural networks. Before joining MIT EECS, she obtained her B.S. in Electrical Engineering and M.S. in Communication Engineering, both from National Taiwan University. She also has research experience in robust regression, uncertainty quantification in silicon photonics, microwave filter design, and combinatorics. For more details, please see https://lilyweng.github.io/