Today, “artificial intelligence” seems to be everywhere — in our phones, vacuums, hospitals, and inboxes — but it can be hard to separate science fiction from science fact. Many discussions about AI imagine a fully autonomous superintelligence that designs itself with little to no human intervention, making decisions in ways that humans cannot possibly understand. Yet the work of designing, developing, engineering, training, and testing such systems requires a massive amount of human labor, which is typically erased when such systems are released as products. In this talk, Stuart Geiger gives a human-centered, behind-the-scenes introduction to machine learning, illustrating the creative, interpretive, and often messy labor humans perform to make seemingly autonomous agents work. Understanding the humanity behind artificial intelligence is important if we want to think constructively about issues of bias, fairness, accountability, and transparency in AI. This event is being presented as part of the Bay Area Science Festival.
Date: November 1, 2017
Time: 7:00 pm - 8:30 pm
Venue: Restaurant Valparaiso, 1403 Solano Ave, Albany, CA 94706
Cost: Free, open to the public, no registration required
I’m an ethnographer of science and technology, and I study the infrastructures and institutions that support the production of knowledge. Most of my previous work has been on Wikipedia, where I’ve studied the community of volunteer editors who produce and maintain an open encyclopedia. I’ve also studied distributed scientific research networks and projects, including the Long-Term Ecological Research Network and the Open Science Grid. In Wikipedia and scientific research, I’ve studied topics including newcomer socialization, community governance, specialization and professionalization, quality control and verification, cooperation and conflict, the roles of support staff and technicians, and diversity and inclusion. And, as these communities are made possible through software systems, I’m very interested in how the design of software tools and systems intersects with all of these issues.
I’m an interdisciplinary nomad who loves collaborating with people who use other kinds of methods and approaches. I began college as a computer science major at UT-Austin but switched to philosophy halfway through and got a degree in the humanities. I got my MA in the Communication, Culture, and Technology program at Georgetown University, where I began empirically studying communities using qualitative and ethnographic methods. Then, I went to the UC-Berkeley School of Information for my Ph.D. and worked with anthropologists, sociologists, psychologists, historians, organizational and management scholars, designers, and computer scientists. In terms of academic fields, I spend much of my time in science and technology studies, computer-supported cooperative work, and new media studies. I’m very excited to be bringing these approaches and methods to the challenges and opportunities of data science.