November 22, 2019, 9:30 AM - 9:45 AM
The Heldrich Hotel & Conference Center
10 Livingston Avenue
New Brunswick, NJ 08901
Steven Wu, University of Minnesota
Much of the literature in fair machine learning takes the "statistical group fairness" approach: first fix a small collection of high-level groups defined by protected attributes (e.g., race or gender), and then ask for approximate parity of some statistic of the classifier (e.g., positive classification rate or false positive rate) across these groups. While this approach is amenable to tractable algorithmic formulations, it does not provide fairness protection on an individual level. In contrast, the "individual fairness" approach aims to address this limitation, but it relies on strong assumptions on the data (e.g., a known similarity measure across individuals). In this talk, I will cover some of my recent work that attempts to bridge the gap between the two fairness approaches and provides efficient algorithms that satisfy a range of fairness notions that interpolate between individual and group fairness.
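The statistical group fairness notion described above can be illustrated with a small sketch: compute a statistic of the classifier (here, positive classification rate and false positive rate) separately for each protected group, then compare the groups. All data and names below are made-up illustrations, not from the speaker's work:

```python
# Minimal sketch of measuring statistical group fairness.
# Group labels, true labels, and predictions are hypothetical.

def group_rates(groups, y_true, y_pred):
    """Return per-group (positive classification rate, false positive rate)."""
    stats = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        # Positive classification rate: fraction of the group predicted positive.
        pos_rate = sum(y_pred[i] for i in idx) / len(idx)
        # False positive rate: among true negatives, fraction predicted positive.
        neg = [i for i in idx if y_true[i] == 0]
        fpr = sum(y_pred[i] for i in neg) / len(neg) if neg else 0.0
        stats[g] = (pos_rate, fpr)
    return stats

groups = ["A", "A", "A", "B", "B", "B"]
y_true = [1, 0, 0, 1, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]
stats = group_rates(groups, y_true, y_pred)
# Group A: positive rate 2/3, FPR 1/2; group B: positive rate 1/3, FPR 0.
```

Approximate parity then asks that these per-group statistics differ by at most some small tolerance; as the abstract notes, such guarantees hold only on average over a group and say nothing about any particular individual.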
Speaker Bio: Steven Wu is an Assistant Professor of Computer Science and Engineering at the University of Minnesota. He is broadly interested in algorithms and machine learning, especially in the areas of privacy-preserving data analysis, fairness in machine learning, and algorithmic economics. He is a recipient of the Facebook Mechanism Design for Social Good Research Award, a Google Faculty Research Award, and a J.P. Morgan Research Faculty Award. Wu was also a participant in the DIMACS Research Experiences for Undergraduates (REU) program in 2010.