Abstract: It has been observed that residual networks can be viewed as the explicit Euler discretization of an Ordinary Differential Equation (ODE). This observation motivated the introduction of so-called Neural ODEs, which allow more general discretization schemes. Neural ODEs hold great promise for physics-constrained problems, for learning better generative models, and even as alternatives to residual networks. In this talk, we will first review recent advances in Neural ODEs. Then, we will discuss the challenging memory constraints of Neural ODEs as compared to residual networks, which have limited their successful application. Finally, we will present an extension of Neural ODEs that allows the neural network parameters to evolve, in a coupled ODE-based formulation. We will show that the Neural ODE method introduced earlier is in fact a special case of this new, more general framework, and present recent results obtained with it.
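The correspondence mentioned at the start of the abstract can be made concrete in a few lines: a residual block computes x + f(x), which is exactly one explicit Euler step of the ODE dx/dt = f(x) with step size h = 1. The sketch below uses a toy, hypothetical residual branch f (a fixed linear map with a tanh nonlinearity) purely for illustration; it is not the speaker's model.

```python
import numpy as np

def f(x):
    # Toy stand-in for a learned residual branch (hypothetical).
    W = np.array([[0.0, 1.0], [-1.0, 0.0]])
    return np.tanh(W @ x)

def residual_block(x):
    # ResNet update: x_{k+1} = x_k + f(x_k)
    return x + f(x)

def euler_step(x, h):
    # Explicit Euler step for dx/dt = f(x): x(t + h) = x(t) + h * f(x(t))
    return x + h * f(x)

x0 = np.array([1.0, 0.0])
# With step size h = 1 the two updates coincide:
assert np.allclose(residual_block(x0), euler_step(x0, h=1.0))
```

Viewed this way, the discretization scheme becomes a design choice: replacing the Euler step with a higher-order or adaptive solver yields the more general Neural ODE formulation discussed in the talk.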
The Berkeley Statistics and Machine Learning Forum meets biweekly to discuss current applications across a wide variety of research domains and software methodologies. Hosted by UC Berkeley Physics Professor and BIDS Senior Fellow Uros Seljak, these active sessions bring together domain scientists, statisticians and computer scientists who are either developing state-of-the-art methods or are interested in applying these methods in their research. Practical questions about the meetings can be directed to BIDS Fellow Francois Lanusse. All interested members of the UC Berkeley and LBL communities are welcome and encouraged to attend. To receive email notifications about the meetings and upvote papers for discussion, please register here.
Amir Gholami was a BIDS/FODA Data Science Fellow in 2018-2019, working as a postdoctoral research fellow in the Berkeley AI Research Lab under the supervision of Prof. Kurt Keutzer. He received his PhD in Computational Science and Engineering Mathematics from UT Austin, working with Prof. George Biros on bio-physics-based image analysis, research that received UT Austin's best doctoral dissertation award in 2018. Amir has extensive experience in second-order optimization methods, image registration, inverse problems, and large-scale parallel computing, developing codes that have been scaled up to 200K cores. He is a Melosh Medal finalist, recipient of the best student paper award at SC'17 and a Gold Medal in the ACM Student Research Competition, as well as a best student paper finalist at SC'14. His current research includes large-scale training of neural networks, stochastic second-order optimization methods, and robust optimization.