How to improve your MCMC algorithms: Power of gradients and accept-reject step in MCMC algorithms

Berkeley Statistics and Machine Learning Forum


March 18, 2019
1:30pm to 2:30pm
1011 Evans Hall


Abstract: Drawing samples from a known distribution is a core computational challenge common to many disciplines, with applications in statistics, probability, operations research, and other areas involving stochastic models. Recent decades have witnessed great success for Markov chain Monte Carlo (MCMC) algorithms designed to generate such random samples. In this talk, starting with a brief introduction to the topic, we will discuss user-friendly non-asymptotic guarantees for popular MCMC algorithms such as random walk, Langevin, and Hamiltonian Monte Carlo. Besides drawing some beautiful connections between sampling and optimization, our results will provide insight into the advantages of using an accept-reject step and gradient information in these sampling algorithms in high dimensions.
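The two ingredients the abstract highlights can be illustrated with a minimal sketch of the Metropolis-adjusted Langevin algorithm (MALA), which combines a gradient-informed proposal with a Metropolis accept-reject correction. This is a generic illustration, not the specific algorithm or analysis from the talk; the function names and the Gaussian target are assumptions made for the example.

```python
import numpy as np

def mala_sample(log_p, grad_log_p, x0, step, n_steps, rng):
    """Metropolis-adjusted Langevin algorithm (MALA): each iteration makes a
    gradient-informed Langevin proposal, then applies an accept-reject step
    that removes the bias from discretizing the Langevin diffusion."""
    x = np.asarray(x0, dtype=float)
    samples = []

    def log_q(a, b):
        # Log density (up to a constant) of proposing point a from point b
        return -np.sum((a - b - step * grad_log_p(b)) ** 2) / (4.0 * step)

    for _ in range(n_steps):
        # Gradient step plus Gaussian noise: drift toward high-probability regions
        noise = rng.standard_normal(x.shape)
        y = x + step * grad_log_p(x) + np.sqrt(2.0 * step) * noise
        # Metropolis accept-reject step: accept with probability min(1, alpha)
        log_alpha = log_p(y) - log_p(x) + log_q(x, y) - log_q(y, x)
        if np.log(rng.random()) < log_alpha:
            x = y
        samples.append(x.copy())
    return np.array(samples)

# Example target (an assumption for illustration): a standard 2-D Gaussian,
# with log p(x) = -||x||^2 / 2 and gradient -x.
rng = np.random.default_rng(0)
samples = mala_sample(log_p=lambda x: -0.5 * np.sum(x ** 2),
                      grad_log_p=lambda x: -x,
                      x0=np.zeros(2), step=0.1, n_steps=5000, rng=rng)
```

Dropping the accept-reject step here recovers the unadjusted Langevin algorithm, and replacing the gradient drift with pure noise recovers a random walk proposal, which is exactly the family of algorithms whose relative merits the talk compares.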


The Berkeley Statistics and Machine Learning Forum meets weekly to discuss current applications across a wide variety of research domains and software methodologies. Register here to view, propose and vote for this group's upcoming discussion topics. All interested members of the UC Berkeley and LBL communities are welcome and encouraged to attend. Questions may be directed to François Lanusse.


Raaz Dwivedi

EECS, UC Berkeley

Raaz Dwivedi is a fourth-year graduate student in the department of EECS at the University of California, Berkeley, advised by Martin Wainwright and Bin Yu. He is associated with the Berkeley Artificial Intelligence Research group (BAIR), and his research interests include the interplay of machine learning, optimization, and Monte Carlo methods, with an emphasis on better understanding the randomized algorithms used for machine learning tasks such as inference, sampling, and optimization.