Biases Beyond Observation

Berkeley Distinguished Lectures in Data Science

Most proposed fairness measures for machine learning are observational in that they depend only on the joint distribution of the features, predictor, and outcome. Dr. Hardt will highlight a few useful observational criteria before arguing why observational criteria in general are unable to resolve questions of fairness conclusively. Moving beyond observational criteria, he will outline a causal framework for reasoning about discrimination based on sensitive characteristics.
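A minimal sketch of what "observational" means here: criteria such as demographic parity and equalized odds can be estimated from samples of the sensitive attribute A, the predictor Yhat, and the outcome Y alone, with no causal model. The function names and toy data below are illustrative assumptions, not material from the lecture.

```python
# Sketch (not from the talk): observational fairness criteria depend only
# on the joint distribution of (A, Yhat, Y), so they can be estimated
# directly from labeled samples.

def cond_mean(values, mask):
    """Mean of values[i] over the indices where mask[i] is True."""
    sel = [v for v, m in zip(values, mask) if m]
    return sum(sel) / len(sel)

def demographic_parity_gap(a, yhat):
    """|P(Yhat=1 | A=0) - P(Yhat=1 | A=1)| -- ignores the outcome Y."""
    return abs(cond_mean(yhat, [x == 0 for x in a])
               - cond_mean(yhat, [x == 1 for x in a]))

def equalized_odds_gaps(a, yhat, y):
    """(TPR gap, FPR gap) between groups A=0 and A=1, conditioning on Y."""
    gaps = []
    for label in (1, 0):  # condition on Y=1 (TPR), then Y=0 (FPR)
        g0 = cond_mean(yhat, [ai == 0 and yi == label for ai, yi in zip(a, y)])
        g1 = cond_mean(yhat, [ai == 1 and yi == label for ai, yi in zip(a, y)])
        gaps.append(abs(g0 - g1))
    return tuple(gaps)

# Toy data: A = group membership, Yhat = predictions, Y = true outcomes
a    = [0, 0, 0, 0, 1, 1, 1, 1]
yhat = [1, 1, 0, 0, 1, 0, 0, 0]
y    = [1, 0, 1, 0, 1, 1, 0, 0]

dp = demographic_parity_gap(a, yhat)
tpr_gap, fpr_gap = equalized_odds_gaps(a, yhat, y)
```

Because both quantities are functions of the joint distribution alone, two very different causal mechanisms can produce identical gaps, which is the gist of the argument that observational criteria cannot settle fairness questions conclusively.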

The Berkeley Distinguished Lectures in Data Science, co-hosted by the Berkeley Institute for Data Science (BIDS) and the Berkeley Division of Data Sciences, features faculty doing visionary research that illustrates the character of the ongoing revolution in data, computation, and inference. In this inaugural Fall 2017 "local edition," we bring forward Berkeley faculty working in these areas to enrich the active connections among colleagues campus-wide. All campus community members are welcome and encouraged to attend. Arrive at 3:30pm for tea, coffee, and discussion.

Speaker(s)

Moritz Hardt

Assistant Professor, EECS

Moritz Hardt was a senior research scientist at Google before joining the CS faculty at UC Berkeley in Fall 2017. Previously, he was a postdoc and then a researcher at IBM Almaden. He completed his PhD in Computer Science at Princeton University in 2011, where his advisor was Boaz Barak. Before coming to Princeton, he spent a year as a research scholar at Carnegie Mellon University and three years at Saarland University, where he obtained a BSc and an MSc in Computer Science in 2007.