Machine Learning and Science Forum: Estimating Gradients of Distributions for Generative Modeling

ML&Sci Forum

April 6, 2020
1:30pm to 2:30pm
Virtual Participation

Participate remotely via Zoom; full details about this meeting will be posted on this page.

Abstract: The gradient of a log probability density function (also known as the score function) is a very useful quantity in generative modeling. One existing method for estimating the score function from data is score matching, yet it is computationally prohibitive for complex, high-dimensional datasets. To alleviate this difficulty, we propose sliced score matching, a new approach based on random projections that scales much better than score matching, enjoys strong theoretical guarantees, and suffers almost no performance loss. We demonstrate the efficacy of this method on learning deep energy-based models and on training variational and Wasserstein autoencoders with implicit encoders. By directly estimating score functions from i.i.d. data samples, we propose a new framework for generative modeling that allows flexible energy-based / non-normalized model architectures and requires neither sampling during training nor adversarial optimization. Using annealed Langevin dynamics, we are able to produce image samples of quality comparable to GANs on the MNIST, CelebA, and CIFAR-10 datasets.
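To give a flavor of the sliced score matching idea described above, the following is a minimal, hedged sketch in NumPy, not the authors' implementation. It uses a toy score model s_theta(x) = -x / theta (the score of a zero-mean Gaussian with variance theta), chosen so that the Jacobian term in the sliced objective has a closed form instead of requiring automatic differentiation; all variable names and parameter values here are illustrative assumptions.

```python
import numpy as np

# Toy sliced score matching: fit the variance theta of a Gaussian model
# N(0, theta * I) to data by minimizing the sliced objective
#   J(theta) = E_{v,x}[ v^T grad_x (v^T s_theta(x)) + 0.5 (v^T s_theta(x))^2 ]
# For s_theta(x) = -x / theta, the Jacobian term is -||v||^2 / theta exactly.

rng = np.random.default_rng(0)
d, n = 3, 5000
true_var = 2.0
x = rng.normal(scale=np.sqrt(true_var), size=(n, d))  # data ~ N(0, 2 I)
v = rng.normal(size=(n, d))                           # random projections

def ssm_loss(theta):
    jac_term = -(v ** 2).sum(axis=1) / theta              # v^T grad_x (v^T s)
    sq_term = 0.5 * ((v * x).sum(axis=1) / theta) ** 2    # 0.5 (v^T s)^2
    return np.mean(jac_term + sq_term)

# Minimize over a grid; the minimizer should land near the data variance.
thetas = np.linspace(0.5, 4.0, 351)
best = thetas[np.argmin([ssm_loss(t) for t in thetas])]
print(best)  # close to true_var = 2.0
```

In this linear-Gaussian case the minimizer can be checked analytically: it is the ratio of the empirical means of (v·x)² and ‖v‖², which concentrates around the data variance, mirroring how the full method estimates scores without computing the model's normalizing constant.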

The Machine Learning and Science Forum (formerly the Berkeley Statistics and Machine Learning Forum) meets biweekly to discuss current applications across a wide variety of research domains in the physical sciences and beyond. Hosted by UC Berkeley Physics Professor and BIDS Senior Fellow Uros Seljak, these active sessions bring together domain scientists, statisticians, and computer scientists who are either developing state-of-the-art methods or are interested in applying these methods in their research. To receive email notifications about upcoming meetings, or to request more information, please contact berkeleymlforum@gmail.com. All interested members of the UC Berkeley and Berkeley Lab communities are welcome and encouraged to attend.


Yang Song

Computer Science Department, Stanford University

Yang Song is a fourth-year PhD student working with Stefano Ermon at Stanford University. He is affiliated with the Stanford AI Lab (SAIL). His research focuses on efficient and flexible methods for understanding high-dimensional data distributions, with applications in robust machine learning. He is a recipient of the J.P. Morgan AI Research PhD Fellowship. He obtained his B.S. in Physics from Tsinghua University.