Bispectral Networks for Robust Group-Invariant Representation Learning

BIDS Machine Learning and Science Forum

Date: Monday, May 16, 2022
Time: 11:00 AM - 12:00 PM Pacific Time
Location: Virtual participation via Zoom

Speaker: Sophia Sanborn, Postdoctoral Scholar, Redwood Center for Theoretical Neuroscience, UC Berkeley

Abstract: A fundamental problem in machine learning is to embed data into a space that eliminates factors of variation irrelevant to a task. Classic approaches to achieving invariance in deep learning (e.g., pooling) are highly lossy, in that they throw out much of the signal structure along with the variation. This lack of selectivity may underlie many well-known failure modes of deep networks, such as excessive invariance and susceptibility to adversarial perturbations. We propose grounding the problem of invariant representation learning in group theory. For many natural datasets, variations are highly structured, as they arise from the symmetries and geometry of the space in which the data lie. Consequently, many of the transformations occurring in natural data can be described in terms of the actions of Lie groups on manifolds. In this talk, I introduce a novel neural network architecture (Bispectral Networks) based on the ansatz of the bispectrum, an analytically defined group invariant that is complete—that is, it preserves all signal structure while removing only the variation due to group actions. Bispectral Networks can be used to learn representations that are invariant to arbitrary unknown commutative group structure in data. I demonstrate that Bispectral Networks recover the group structure present in several datasets, with the trained models learning the group's irreducible representations. Moreover, the trained networks are complete and highly robust to adversarial attacks.
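For intuition, the classical bispectrum mentioned in the abstract can be computed in closed form for the simplest commutative case: the cyclic group of circular shifts acting on a 1-D signal. The sketch below is an illustration of that analytical invariant, not of the Bispectral Network architecture itself; the function name `bispectrum` is ours.

```python
import numpy as np

def bispectrum(x):
    """Bispectrum of a 1-D signal under the cyclic (circular-shift) group.

    For the group Z/N acting by circular shifts, the bispectrum is
        B[k1, k2] = F[k1] * F[k2] * conj(F[(k1 + k2) mod N]),
    where F is the DFT of x. A shift multiplies F[k] by a phase
    exp(-2*pi*i*k*t/N); the phases cancel in the triple product, so B
    is shift-invariant while (generically) retaining all other
    signal structure.
    """
    N = len(x)
    F = np.fft.fft(x)
    k = np.arange(N)
    return F[:, None] * F[None, :] * np.conj(F[(k[:, None] + k[None, :]) % N])
```

A quick check of the invariance: `bispectrum(np.roll(x, t))` matches `bispectrum(x)` for any shift `t`, whereas a signal with genuinely different structure produces a different bispectrum.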

The BIDS Machine Learning and Science Forum meets biweekly to discuss current applications across a wide variety of research domains in the physical sciences and beyond. Hosted by BIDS Affiliates Uroš Seljak (professor of Physics at UC Berkeley) and Ben Nachman (physicist at Lawrence Berkeley National Laboratory), these active sessions bring together domain scientists, statisticians, and computer scientists who are either developing state-of-the-art methods or are interested in applying these methods in their research. To receive email notifications about upcoming meetings, or to request more information, please contact the organizers at berkeleymlforum@gmail.com. All interested members of the UC Berkeley and Berkeley Lab communities are welcome and encouraged to attend.


Sophia Sanborn

Redwood Center for Theoretical Neuroscience, UC Berkeley

Sophia Sanborn is a Postdoctoral Scholar in UC Berkeley’s Redwood Center for Theoretical Neuroscience and UC Santa Barbara’s Department of Electrical and Computer Engineering. Her research lies at the intersection of applied mathematics, computational neuroscience, and machine learning. In particular, she uses methods from group theory and differential geometry to model neural representations in biology and construct artificial neural networks that reflect and respect the symmetries and geometry of the natural world. She received her PhD from UC Berkeley in 2021 and has worked as a researcher in the Intel Neuromorphic Computing Lab and Intel AI. She is the recipient of several fellowships and awards, including the NSF GRFP, the Beinecke Scholarship, and the UBC Early Career Invited Lecture Award.