Machine Learning and Science Forum — The Large Learning Rate Phase of Deep Learning

Machine Learning and Science Forum 
Date: Monday, October 26, 2020
Time: 11:00 AM - 12:00 PM Pacific Time
Location: Participate remotely using this Zoom link 

The Large Learning Rate Phase of Deep Learning

Speaker: Yasaman Bahri, Research Scientist, Google Brain
Abstract: Recent investigations into infinitely wide deep neural networks have given rise to intriguing connections between deep networks, kernel methods, and Gaussian processes. Nonetheless, there are important dynamical regimes for finite-width neural networks that lie far outside the realm of applicability of these results. I will discuss how the choice of learning rate in gradient descent is a crucial factor that naturally classifies the dynamics of deep networks into two classes (a “lazy” regime and a “catapult” regime). These phases are separated by a sharp phase transition as deep networks become wider. I will describe the distinct phenomenological signatures of the two phases, how they are elucidated in a class of simple, solvable models, and the implications for model performance.
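
The two regimes can be seen in a toy experiment. The sketch below is an illustration, not the speaker's code: the two-layer linear model, the single training example, and the learning-rate values are assumptions chosen for simplicity, in the spirit of the "simple, solvable models" the abstract mentions.

    import numpy as np

    def train(lr, m=4096, steps=300, y=1.0, seed=0):
        # Two-layer linear net f = (u . w) / sqrt(m), trained by full-batch
        # gradient descent on a single example (input x = 1, target y).
        rng = np.random.default_rng(seed)
        u = rng.standard_normal(m)
        w = rng.standard_normal(m)
        losses = []
        for _ in range(steps):
            err = u @ w / np.sqrt(m) - y        # f(x) - y
            losses.append(0.5 * err ** 2)
            # Update both layers simultaneously, using the old values of each.
            u, w = (u - lr * err * w / np.sqrt(m),
                    w - lr * err * u / np.sqrt(m))
        return np.array(losses)

    # At this initialization the kernel (NTK) eigenvalue is roughly
    # (|u|^2 + |w|^2) / m ~= 2, so the lazy/catapult boundary sits near
    # lr = 2/2 = 1, and beyond roughly 4/2 = 2 training diverges.
    for lr in (0.5, 1.5):   # illustrative values: lazy vs. catapult
        losses = train(lr)
        print(f"lr={lr}: initial={losses[0]:.2f}, "
              f"peak={losses.max():.2f}, final={losses[-1]:.2e}")

In the first run the loss decreases monotonically from the start (the lazy regime); in the second it first grows by orders of magnitude before settling into a flatter region and converging (the catapult regime).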

The BIDS Machine Learning and Science Forum (formerly the Berkeley Statistics and Machine Learning Forum) was launched in Spring 2018 and currently meets biweekly (during the spring and fall semesters) to discuss current applications of machine learning across a wide variety of research domains in the physical sciences and beyond. Hosted by BIDS Faculty Affiliate Uroš Seljak (professor of Physics at UC Berkeley) and BIDS Research Affiliate Ben Nachman (physicist and staff scientist at Lawrence Berkeley National Laboratory), these active sessions bring together domain scientists, statisticians, and computer scientists who are either developing state-of-the-art methods or are interested in applying these methods in their research. All interested members of the UC Berkeley and Berkeley Lab communities are welcome and encouraged to attend. To receive email notifications about upcoming meetings, or to request more information, please contact berkeleymlforum@gmail.com.

Speaker(s)

Yasaman Bahri

Research Scientist, Google Brain

Yasaman Bahri is a Research Scientist at Google Brain, where her research program works toward a scientific understanding of deep learning. Typically this has involved theoretical analysis as well as empirical work to tease out phenomena. A primary goal is to elucidate the underpinnings of current successes and shortcomings in deep learning (beginning with supervised learning) and leverage these toward more general machine intelligence. In particular, this involves understanding the interplay among (i) algorithms (the traditional domain of computer science), (ii) models, and (iii) tasks. She is also interested in connections between theoretical physics and machine learning where such connections are well motivated by the problem at hand. She was trained as a theoretical quantum condensed matter physicist and received her Ph.D. in Physics from UC Berkeley in 2017.