A General Framework for Incentive-Aware Classification
Traditional machine learning methods often assume access to trustworthy data. However, this assumption fails to hold in many real-world scenarios. In particular, entities being classified may be incentivized to misreport their private information (i.e., their features) in order to receive a more desirable outcome. In such cases, the performance of traditional methods may be arbitrarily bad, and it is therefore crucial to design incentive-aware machine learning methods that are robust to such strategic behavior. In this talk, we introduce a general framework for incentive-aware classification: the partial verification framework. Under this framework, we design several incentive-aware machine learning algorithms for various application scenarios, which provide provable, and often optimal, guarantees for the respective learning tasks in the presence of strategic manipulation.
Hanrui is a fourth-year PhD student in Computer Science at Duke University, advised by Vincent Conitzer. His research interests lie in the area of Economics and Computation -- the study of problems with economic motivations that can be approached using techniques from computer science. His recent work focuses on learning and decision making in complex environments: in the presence of strategic behavior, under uncertainty about the future, and with rich preferences and limited means of interaction.