Data Science Coast To Coast — Urban Informatics
Date: Thursday, April 8, 2021
Time: 12:00–1:00 PM Pacific
The Data Science Coast to Coast (DS C2C) seminar series is hosted jointly by seven academic data science institutes — BIDS, NYU’s Center for Data Science, Rice University’s Ken Kennedy Institute, Stanford Data Science, the University of Michigan’s Michigan Institute for Data Science (MIDAS), the University of Washington’s eScience Institute, and Johns Hopkins University’s Institute for Data Intensive Engineering and Science (IDIES) — to provide a unique opportunity to foster a broad-reaching data science community. In the first half of 2021, DS C2C will host five seminars, each featuring one faculty member and one postdoctoral fellow from two universities. Each speaker will give a 20-minute talk about ongoing projects and motivating issues, followed by 20 minutes of discussion with the audience. These seminars will be the launching point for follow-on research discussion meetings that we hope will lead to fruitful collaborative research.
Quantifying and Mitigating Sources of Bias in a Decision-support System
Arya Farahi, Michigan Data Science Fellow, University of Michigan
Applications of AI decision-support systems are increasingly shaping the fabric of our society. These systems can exhibit and exacerbate undesired biases that harm under-represented populations. Therefore, it is critical to evaluate these systems not only through the lens of predictive power and error rates but also through the lens of trustworthiness and fairness. In this talk, I will focus on two specific sources of bias in decision-support systems and propose mitigation strategies. In the first part, I will discuss biases that originate from historical decisions and are reflected in data. I propose a metric for quantifying disparity in data and illustrate how we can alleviate these historical biases by applying simple modifications to a decision-making system. In the second part, I will shed light on biases that originate from predictive models. Predictive models are a central part of any decision-making system: end-users act on the information these models provide, and biased or untrustworthy information can mislead them or incentivize the public to mistrust the system. I will present our mitigation method, KiTE, a hypothesis-testing framework with provable guarantees that enables practitioners to (i) test whether a model provides trustworthy information with respect to each sub-group of a population and (ii) estimate and correct for prediction bias at the individual and group levels.
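The KiTE framework itself is not reproduced here, but the kind of per-subgroup trustworthiness check the abstract describes can be illustrated with a much simpler stand-in: estimate each group's calibration bias (mean predicted probability minus observed positive rate) and use a permutation test to ask whether the disparity across groups is larger than chance. The function names and the choice of test statistic below are illustrative assumptions, not KiTE's actual method:

```python
import numpy as np

def group_calibration_bias(y_true, y_prob, groups):
    """Per-group calibration bias: mean predicted probability minus
    observed positive rate, computed separately for each sub-group.
    (Illustrative stand-in for a sub-group trustworthiness metric.)"""
    return {
        g: float(np.mean(y_prob[groups == g]) - np.mean(y_true[groups == g]))
        for g in np.unique(groups)
    }

def calibration_disparity_test(y_true, y_prob, groups, n_perm=1000, seed=0):
    """Permutation test of H0: calibration bias is homogeneous across
    groups. The statistic is the largest absolute per-group bias; under
    H0, shuffling group labels should produce comparable values."""
    rng = np.random.default_rng(seed)
    observed = max(abs(b) for b in
                   group_calibration_bias(y_true, y_prob, groups).values())
    exceed = 0
    for _ in range(n_perm):
        permuted = rng.permutation(groups)
        stat = max(abs(b) for b in
                   group_calibration_bias(y_true, y_prob, permuted).values())
        if stat >= observed:
            exceed += 1
    # Add-one smoothing gives a valid (slightly conservative) p-value.
    return observed, (exceed + 1) / (n_perm + 1)
```

For example, if a model is well calibrated for one group but systematically overconfident for another, the observed statistic will far exceed the permutation distribution and the test returns a small p-value, flagging the model as untrustworthy for that sub-group.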
Revealing the “Big Lie”: Collaborative Data Science for Rapid Response to Online Disinformation
Kate Starbird, Associate Professor of Human-Centered Design and Engineering, University of Washington
In this talk, I’ll present preliminary research results from ongoing efforts to understand the spread of disinformation about the 2020 Election. First, I’ll describe the mission, structure, and everyday work practices of the Election Integrity Partnership — a multi-stakeholder collaboration that addressed mis- and disinformation about the 2020 U.S. election in (near) real-time through rapid response data science. Next, I’ll take you through some of our analyses to show how the “Big Lie” — the sustained effort to sow doubt in the results of the 2020 election — took shape on social media platforms throughout the latter half of 2020. I’ll highlight the participatory nature of this disinformation campaign and reveal some of the “super spreader” accounts that helped produce and sustain it. Finally, I’ll note how some of the social media platforms have evolved their strategies to address this kind of disinformation and wrap up by talking about what might come next, both in terms of platform policies and future collaborations for rapid response to disinformation.
All events in the series are free to attend, and all who are interested are welcome and encouraged to participate. Questions may be directed to Jing Liu (ljing@umich.edu), Managing Director of MIDAS.