UC Berkeley's new AI Policy Hub, launched this month by the Berkeley Center for Long-Term Cybersecurity (CLTC) and the Center for Information Technology Research in the Interest of Society and the Banatao Institute (CITRIS), will train a new generation of researchers to address the risks and consequences of artificial intelligence by developing governance and policy frameworks that support its safe and beneficial implementation. BIDS Faculty Affiliates Hany Farid (EECS, School of Information), Pamela Samuelson (Berkeley Law, School of Information, and Berkeley Center for Law & Technology), and Bin Yu (Statistics, EECS) will serve as Faculty Advisors.
Call for Applications: The AI Policy Hub is currently accepting applications for its inaugural Fall 2022–Spring 2023 cohort; apply online by Tuesday, April 26, 2022. A key goal of the AI Policy Hub is to strengthen interdisciplinary research approaches to AI policy while expanding the inclusion of diverse perspectives, as both are necessary to support safe and beneficial AI into the future. Applicants will have the opportunity to conduct innovative, interdisciplinary research and make meaningful contributions to the AI policy landscape, helping to reduce the harms and amplify the benefits of artificial intelligence. The AI Policy Hub will provide participants with practical training for AI policy career paths in federal and state government, academia, think tanks, and industry. Program participants will also benefit from faculty and staff mentorship, access to world-renowned experts and training sessions, connections with policymakers and other decision-makers, and opportunities to share their work at a public symposium. UC Berkeley students actively enrolled in graduate degree programs (Master’s and PhD students) from all departments and disciplines are encouraged to apply.
Read more in the following launch announcement, cross-posted from CLTC News: UC Berkeley Launches AI Policy Hub (March 10, 2022).
Two prominent research centers at the University of California, Berkeley have joined together to launch the AI Policy Hub, an interdisciplinary initiative training forward-thinking researchers to develop effective governance and policy frameworks to guide AI, today and into the future.
The AI Policy Hub will support cohorts of UC Berkeley graduate students to conduct research and develop science-based policy recommendations for realizing the potential benefits of AI, while managing harms and reducing the risk of devastating outcomes, including accidents, abuses, and systemic threats. The researchers will share findings through symposia, policy briefings, papers, and other resources, to inform policymakers and other AI decision-makers so they can act with foresight.
The AI Policy Hub will be run jointly by two centers of excellence in AI policy research: the AI Security Initiative, part of the Center for Long-Term Cybersecurity, and the University of California’s CITRIS Policy Lab, part of the Center for Information Technology Research in the Interest of Society and the Banatao Institute (CITRIS). The Hub will kick off this spring, and the first cohort of graduate student researchers will begin their work in fall 2022.
“AI systems are already causing significant harm, and the risks are multiplying as AI systems increase in complexity and scale. Meanwhile, there are growing power imbalances, as the number of organizations and countries that control the majority of AI development remains small,” says Jessica Newman, Director of the AI Security Initiative. “Recommendations resulting from this research will not only provide guidance to policymakers and decision-makers today but will also assess how potential future shifts will test the resiliency of the recommendations.”
Artificial intelligence poses consequential risks to humanity, including dark patterns and manipulation, weaponization, growing inequities, and abuses of power. The AI Policy Hub will help mitigate these risks by training a new generation of forward-thinking researchers equipped with the skills to develop appropriate governance and policy frameworks for managing AI into the future.
A key goal of the AI Policy Hub is to expand and diversify the pool of researchers and decision-makers with AI policy expertise. “The AI Policy Hub will strengthen interdisciplinary research approaches to AI policy,” says Brandie Nonnecke, Director of the CITRIS Policy Lab. “Expanding inclusion of diverse perspectives is necessary to support the global development and implementation of safe and beneficial AI, today and into the future.”
The AI Policy Hub is made possible with initial financial support from the Future of Life Institute (FLI), a nonprofit organization that seeks to steer the development and use of transformative technology towards benefitting life and away from large-scale risks. The initiative will also collaborate with numerous UC Berkeley departments and centers that are contributing work on AI governance and policy, including Berkeley’s newly created Division of Computing, Data Science, and Society (CDSS) and its affiliated School of Information, the Center for Human-Compatible Artificial Intelligence (CHAI), the Berkeley Center for Law & Technology, Berkeley’s College of Engineering, and the Goldman School of Public Policy.
The Hub will harness the collective networks and expertise of CLTC and CITRIS, which have an established record of helping policymakers develop effective guidelines for the responsible use of AI. For example, the CITRIS Policy Lab and the AI Security Initiative were directly involved in efforts to promote the responsible implementation of AI at the University of California and in the California state government. The AI Policy Hub aims to inspire a new generation of AI policy professionals and advance necessary interventions to promote safe and beneficial AI at the institutional, state, federal, and international levels.
“Addressing the risks posed by increasingly advanced AI could not be more urgent,” says Anthony Aguirre, Executive Vice President and Head of Policy and Strategy at the Future of Life Institute. “As a leader in AI safety, security, and ethics, UC Berkeley is the perfect home for the AI Policy Hub, where world-class researchers will translate cutting-edge AI research into actionable policy insights. We look forward to working with the AI Security Initiative and the CITRIS Policy Lab on this essential effort.”
For more information about the AI Policy Hub and upcoming opportunities for students, please see https://cltc.berkeley.edu/aipolicyhub.
If you are a UC Berkeley student with inquiries about the application, or a faculty member or researcher in the field interested in collaboration or providing student mentorship, please contact Jessica Newman at email@example.com. For media inquiries, please contact Charles Kapelke at firstname.lastname@example.org. If you are interested in supporting our work philanthropically, please contact Shanti Corrigan at email@example.com, who can facilitate introductions to our team of experts and explain the impact that gifts of all sizes can make in advancing our mission.