Sizing Neural Network Experiments

NERSC Data Seminar

Lecture

April 12, 2019
12:00pm to 1:00pm
Lawrence Berkeley National Laboratory

BIDS Senior Fellow Gerald Friedland will present this week's NERSC Data Seminar at Lawrence Berkeley National Laboratory.

Berkeley Lab – CS/NERSC Data Seminar
Location: 3101 Shyh Wang Hall (59-3101), LBNL
Seminar Host: Aydin Buluc, Computational Research Division, Lawrence Berkeley National Laboratory

Abstract: Most contemporary machine learning experiments treat the underlying algorithms as a black box. This approach fails, however, when budgeting large-scale experiments or when machine learning is used as part of scientific discovery and uncertainty must be quantifiable. Using the example of neural networks, this talk presents a line of research that enables measuring and predicting the capabilities of machine learners, allowing a more rigorous experimental design process for machine learning experiments. The main idea is to take the viewpoint that memorization is worst-case generalization. The presentation consists of three parts. First, based on MacKay's information-theoretic model of supervised machine learning (MacKay, 2003), I derive four easily applicable engineering principles for analytically determining the upper-limit capacity of neural network architectures. This allows the efficiency of different architectures to be compared independent of any task. Second, I introduce and experimentally validate a heuristic method to estimate the neural network capacity required for a given learning task. Third, I outline a generalization process that successively reduces capacity, starting from the memorization estimate. I conclude with a discussion of the consequences of incorrectly sizing a machine learner, which include a potentially increased number of adversarial examples.
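To make the memorization-as-worst-case-generalization idea concrete, the sketch below compares a crude upper bound on a dense network's memorization capacity against the information content of a labeled dataset. Counting trainable parameters as a bits proxy (roughly one bit per parameter) is an illustrative assumption here, not the four engineering principles from the talk itself; the layer sizes and sample counts are likewise hypothetical.

```python
# Illustrative sketch (assumption: parameter count as a rough proxy, in bits,
# for a dense network's memorization capacity). This is NOT the talk's exact
# capacity analysis; it only shows the flavor of capacity vs. task budgeting.
import math

def mlp_parameter_count(layer_sizes):
    """Trainable parameters (weights + biases) in a fully connected MLP."""
    return sum((fan_in + 1) * fan_out
               for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]))

def dataset_label_bits(num_samples, num_classes):
    """Bits needed to memorize arbitrary labels for num_samples points."""
    return num_samples * math.log2(num_classes)

# Hypothetical example: a 784-100-10 MLP vs. 60,000 binary-labeled samples.
capacity_proxy = mlp_parameter_count([784, 100, 10])  # (785*100) + (101*10)
required = dataset_label_bits(60_000, 2)
print(capacity_proxy, required, capacity_proxy >= required)
```

Under this crude proxy, a network whose capacity far exceeds the task's label information can memorize rather than generalize, which motivates the talk's third part: successively shrinking capacity starting from the memorization estimate.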


Speaker(s)

Gerald Friedland

Adjunct Assistant Professor, EECS, UC Berkeley

BIDS Faculty Affiliate Gerald Friedland is an Adjunct Assistant Professor of Electrical Engineering and Computer Sciences at UC Berkeley and the co-founder and CTO of Brainome. His work focuses on large-scale machine learning for multimedia retrieval, and he has also worked on privacy and privacy education. He has published more than 200 peer-reviewed articles in conferences, journals, and books and co-authored the textbook Multimedia Computing. Dr. Friedland received his doctorate in computer science from Freie Universitaet Berlin, Germany.