Distributive Justice for Machine Learning: An Interdisciplinary Perspective on Defining, Measuring, and Mitigating Algorithmic Unfairness
Automated decision-making tools increasingly make high-stakes decisions about people in areas such as education, credit lending, criminal justice, and beyond. These tools can exhibit and exacerbate existing undesirable biases, adversely impacting already disadvantaged and marginalized social groups and individuals. In this talk, I will illustrate how we can bring together tools and methods from computer science, economics, and political philosophy to define, measure, and mitigate algorithmic unfairness. In particular, I will address two key questions:
▪ Given the decision-making context, how should we define fairness as the equality of some notion of benefit or harm across socially salient groups? First, I will offer a framework for thinking about this question normatively: I map recently proposed notions of group fairness to models of equality of opportunity. This mapping provides a unifying framework for understanding these notions and, importantly, allows us to spell out the moral assumptions underlying each of them. Second, I will give a descriptive answer to the question of “fairness as equality of what?”: I will describe a series of adaptive human-subject experiments we conducted to understand which existing notion best captures laypeople’s perception of fairness.
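To make the group-fairness notions mentioned above concrete, two of the most widely used criteria, demographic (statistical) parity and equal opportunity, reduce to comparing simple rates across groups. The sketch below is my own illustration, not the talk's formalism, and the function names are hypothetical:

```python
import numpy as np

def statistical_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Toy predictions for six individuals, three in each group.
y_true = [1, 1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 1, 1]
group  = [0, 0, 0, 1, 1, 1]
print(statistical_parity_gap(y_pred, group))         # ≈ 0.667
print(equal_opportunity_gap(y_true, y_pred, group))  # 0.5
```

A classifier satisfies each criterion exactly when the corresponding gap is zero; the mapping to equality-of-opportunity models concerns which of these (and related) conditions is morally appropriate in a given context.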
▪ How can we measure unfairness (both at the individual and the group level) and bound it in a computationally efficient manner? Existing notions of fairness specify conditions under which a model is fair, but they do not offer a measure of how unfair a model is. In practice, however, designers often need to select the least unfair model from a feasible set of unfair alternatives. I will present (income) inequality indices from economics as a unifying framework for measuring unfairness at both the individual and group level, and I will propose cardinal social welfare functions as an alternative measure of fairness beyond equality and an effective method for bounding inequality.
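One standard family of inequality indices from the economics literature is the generalized entropy index. The sketch below is my own illustration of the idea, not the talk's formulation: it treats each individual's benefit from the decision as a number and returns 0 under perfect equality, with larger values indicating more inequality:

```python
import numpy as np

def generalized_entropy_index(benefits, alpha=2):
    """Generalized entropy index GE(alpha) of a vector of individual benefits.

    GE(alpha) = (1 / (n * alpha * (alpha - 1))) * sum((b_i / mu)**alpha - 1),
    where mu is the mean benefit. Requires alpha not in {0, 1}.
    Returns 0 for perfect equality; alpha=2 is half the squared
    coefficient of variation.
    """
    b = np.asarray(benefits, dtype=float)
    mu = b.mean()
    n = len(b)
    return np.sum((b / mu) ** alpha - 1) / (n * alpha * (alpha - 1))

print(generalized_entropy_index([1.0, 1.0, 1.0, 1.0]))  # 0.0 (perfect equality)
print(generalized_entropy_index([0.5, 1.0, 1.5, 2.0]))  # positive: unequal benefits
```

Such an index gives a single number that can be compared across candidate models, which is exactly what is needed to select the least unfair model from a feasible set.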
Hoda Heidari is currently a Postdoctoral Associate in the Department of Computer Science at Cornell University, where she collaborates with Professors Jon Kleinberg, Karen Levy, and Solon Barocas through the AIPP (Artificial Intelligence, Policy, and Practice) initiative. Hoda’s research is broadly concerned with the societal and economic aspects of Artificial Intelligence, and in particular with the issues of fairness and explainability for Machine Learning. She utilizes tools and methods from Computer Science (Algorithms, AI, and ML) and the Social Sciences (Economics and Political Philosophy) to quantify and mitigate the inequalities that arise when socially consequential decisions are automated. Her work has appeared in top-tier Computer Science venues such as ICML, NeurIPS, KDD, AAAI, IJCAI, and EC. Before coming to Cornell, Hoda was a Postdoctoral Fellow at the Institute for Machine Learning at ETH Zürich, working under the supervision of Professor Andreas Krause. Hoda completed her doctoral studies in Computer and Information Science at the University of Pennsylvania, where she was advised by Professors Michael Kearns and Ali Jadbabaie. Hoda has organized multiple academic events on the topic of her research, including a tutorial at the Web Conference (WWW) and a workshop at the Neural Information Processing Systems (NeurIPS) conference. Beyond computer science venues, she has been invited to and participated in numerous interdisciplinary panels and discussions addressing the implications of AI for society.