Machine Learning and Science Forum
Date: Monday, September 14, 2020
Time: 11:00 AM - 12:00 PM Pacific Time
Location: Participate remotely using this Zoom link
Comprehension is compression: understanding neural networks through pruning and the lottery ticket hypothesis
Michela Paganini, Facebook AI Research (FAIR)
Abstract: State-of-the-art deep learning techniques rely on over-parametrized models that are hard to deploy. By contrast, biological neural networks are known to use efficient, sparse connectivity. I will present the practice of pruning both as a practical engineering intervention to reduce model size and as a scientific tool to investigate the behavior and trainability of compressed models under the "lottery ticket hypothesis" (Frankle and Carbin, 2018). I will argue that a fundamental scientific understanding of the inner workings of neural networks is necessary to build a path towards robust, efficient AI, and I will introduce research and open-source work that has facilitated the investigation of the behavior of pruned models.
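To make the abstract's two ideas concrete, the sketch below shows unstructured magnitude pruning plus the rewind step at the heart of the lottery ticket hypothesis (prune a trained network, then reset the surviving weights to their initial values before retraining). This is a minimal illustration on a single NumPy weight matrix, not the speaker's code; the names (`magnitude_mask`, `w_init`, `w_trained`) and the 80% sparsity level are assumptions for the example.

```python
import numpy as np

def magnitude_mask(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Boolean mask keeping the largest-magnitude (1 - sparsity) fraction of weights."""
    k = int(sparsity * weights.size)  # number of weights to prune away
    if k == 0:
        return np.ones_like(weights, dtype=bool)
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.abs(weights) > threshold

# Hypothetical single layer standing in for a trained network.
rng = np.random.default_rng(0)
w_init = rng.normal(size=(8, 8))                            # weights at initialization
w_trained = w_init + rng.normal(scale=0.1, size=(8, 8))     # weights "after training"

# Prune 80% of the trained weights by magnitude.
mask = magnitude_mask(w_trained, sparsity=0.8)

# Lottery-ticket rewind: surviving weights go back to their initial values;
# the resulting sparse subnetwork would then be retrained from here.
w_ticket = np.where(mask, w_init, 0.0)
```

In the lottery ticket experiments, the interesting question is whether this rewound sparse subnetwork trains to accuracy comparable to the dense original; plain magnitude pruning without the rewind is the standard engineering baseline for model compression.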
The Machine Learning and Science Forum meets biweekly to discuss current applications across a wide variety of research domains in the physical sciences and beyond. Hosted by UC Berkeley Physics Professor and BIDS Faculty Affiliate Uros Seljak, these active sessions bring together domain scientists, statisticians, and computer scientists who are either developing state-of-the-art methods or are interested in applying these methods in their research. To receive email notifications about upcoming meetings, or to request more information, please contact email@example.com. All interested members of the UC Berkeley and Berkeley Lab communities are welcome and encouraged to attend. Full details about this meeting are posted here: https://bids.github.io/MLStatsForum/.