Recent work on Interpretable Machine Learning
Thursday, September 28, 2017
12:00pm - 1:00pm
D344 LSRC, Duke
The issue of interpretability in predictive modeling is particularly important given that the US government currently pays private companies for black box predictions that are used throughout the US justice system. Do we really trust a black box model to make decisions on criminal justice? ProPublica claimed that we should not. In particular, the black box predictions purchased by the US government are potentially biased. The US government could have tried to prove that no white box (interpretable) model exists with the same accuracy, but it did not attempt to do so. For decisions of this gravity - for justice standards, healthcare, energy reliability, or other critical infrastructure standards - we should consider interpretable models before resorting to a black box.
In this talk I will discuss algorithms for interpretable machine learning. Some of these algorithms are designed to create proofs of nearness to optimality. I will focus on some of our most recent work, including (1) work on optimal rule list models using customized bounds and data structures (these are an alternative to CART), and (2) work on optimal scoring systems (alternatives to logistic regression + rounding).
Since we have methods that can produce optimal or near-optimal models, we can use them to produce interesting new forms of interpretable models. These new forms were simply not possible before, since they are almost impossible to produce using traditional techniques (like greedy splitting and pruning).
(3) Falling rule lists
(4) Causal falling rule lists
(5) Cost-effective treatment regimes
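To make the model forms above concrete, here is a toy sketch of a rule list and of a scoring system. The rules, features, and point values below are invented for illustration only; they are not taken from the papers discussed in the talk.

```python
import math

# Toy rule list: an ordered sequence of if-then rules; the first rule
# that fires determines the prediction. (Illustrative only; not a
# rule list from the papers.)
def rule_list_predict(person):
    if person["age"] < 30 and person["priors"] == 0:
        return "low risk"
    elif person["priors"] >= 3:
        return "high risk"
    else:
        return "medium risk"

# Toy scoring system: integer points are added per condition, then
# mapped to a risk estimate. (The point values and the offset in the
# link function are invented for illustration.)
def risk_score(person):
    points = 0
    if person["age"] >= 60:
        points += 2
    if person["priors"] >= 1:
        points += 3
    return points

def risk_estimate(points):
    # Logistic link, in the spirit of "logistic regression + rounding"
    return 1.0 / (1.0 + math.exp(-(points - 3)))
```

Both model forms are fully inspectable: a user can read off exactly which conditions drive a prediction, which is what makes it valuable to certify that such a model is optimal or near-optimal.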
Work on (1) is joint with postdoc Elaine Angelino, students Nicholas Larus-Stone and Daniel Alabi, and colleague Margo Seltzer. Work on (2) is joint with student Berk Ustun. Work on (3) and (4) is joint with students Fulton Wang and Chaofan Chen, and (5) is joint with student Himabindu Lakkaraju.
Drafts for (1) and (2) are here (both papers are current work):
Certifiably Optimal Rule Lists
Longer version of KDD 2017 paper (oral)
Learning Risk Scores from Large-Scale Datasets
Longer version of KDD 2017 paper
Other papers for (3), (4), and (5) are on my website.
Cynthia Rudin is an Associate Professor in the Departments of Computer Science and Electrical and Computer Engineering at Duke University.