Abstract: Spectral analysis is a foundational tool in physics and underpins many machine learning methods. For finite-dimensional vector spaces, linear operators can be diagonalized by standard methods. However, diagonalizing linear operators on high-dimensional function spaces is a challenging numerical problem. In this talk I will describe Spectral Inference Networks (SpIN), a scalable method for approximating eigenfunctions of linear operators by stochastic gradient descent. In a machine learning context, Spectral Inference Networks can be seen as a generalization of Slow Feature Analysis, but without many of the shortcomings of classic SFA. In computational physics, Spectral Inference Networks are closely related to Variational Monte Carlo methods. I will show applications of SpIN to learning excited states of small quantum systems, interpretable features from video, and eigenoptions for reinforcement learning.
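To give a concrete flavour of the idea, the sketch below illustrates one way to approximate eigenfunctions of a linear operator by stochastic gradient descent: a small neural network outputs several candidate functions, and a trace of a generalized Rayleigh quotient is minimized over minibatches. This is only an illustrative toy, not the talk's or the SpIN paper's implementation; the operator (a 1D quantum harmonic oscillator Hamiltonian), the network architecture, the sampling distribution, and all hyperparameters are assumptions chosen for brevity, and the full SpIN method additionally uses Cholesky-based gradient masking to recover ordered eigenfunctions and bias-corrected covariance estimates, both omitted here.

```python
# Minimal sketch: approximate the lowest-K eigenfunctions of a toy operator
# (1D harmonic oscillator Hamiltonian) by minimizing Tr(Sigma^{-1} Pi),
# the trace of a generalized Rayleigh quotient, with stochastic gradients.
# Illustrative only; not the SpIN authors' implementation.
import jax
import jax.numpy as jnp
import optax

K = 2          # number of eigenfunctions to approximate (assumption)
HIDDEN = 64    # hidden width of the toy network (assumption)

def init_params(key):
    k1, k2, k3 = jax.random.split(key, 3)
    return {
        "w1": jax.random.normal(k1, (1, HIDDEN)) * 0.5,
        "b1": jnp.zeros(HIDDEN),
        "w2": jax.random.normal(k2, (HIDDEN, HIDDEN)) * 0.1,
        "b2": jnp.zeros(HIDDEN),
        "w3": jax.random.normal(k3, (HIDDEN, K)) * 0.1,
    }

def features(params, x):
    """Network u(x) in R^K for a scalar input x."""
    h = jnp.tanh(jnp.array([x]) @ params["w1"] + params["b1"])
    h = jnp.tanh(h @ params["w2"] + params["b2"])
    return (h @ params["w3"]).reshape(K)

def apply_H(params, x):
    """(H u)(x) for H = -1/2 d^2/dx^2 + x^2/2, via autodiff."""
    u = lambda x: features(params, x)
    d2u = jax.jacfwd(jax.jacrev(u))(x)   # second derivative, shape (K,)
    return -0.5 * d2u + 0.5 * x**2 * u(x)

def loss(params, xs):
    """Tr(Sigma^{-1} Pi); minimized when u spans the lowest-K eigenspace.
    The minibatch estimate of this gradient is biased, which the full SpIN
    method corrects with moving averages (omitted here)."""
    U = jax.vmap(lambda x: features(params, x))(xs)    # (B, K)
    HU = jax.vmap(lambda x: apply_H(params, x))(xs)    # (B, K)
    sigma = U.T @ U / xs.shape[0]                      # feature covariance
    pi = U.T @ HU / xs.shape[0]                        # operator moment
    return jnp.trace(jnp.linalg.solve(sigma, pi))

key = jax.random.PRNGKey(0)
params = init_params(key)
opt = optax.adam(1e-3)
opt_state = opt.init(params)

@jax.jit
def step(params, opt_state, xs):
    value, grads = jax.value_and_grad(loss)(params, xs)
    updates, opt_state = opt.update(grads, opt_state)
    return optax.apply_updates(params, updates), opt_state, value

for i in range(2000):
    key, sub = jax.random.split(key)
    # Uniform samples over a box containing most of the eigenfunctions' mass.
    xs = jax.random.uniform(sub, (256,), minval=-5.0, maxval=5.0)
    params, opt_state, value = step(params, opt_state, xs)

# For this toy Hamiltonian the two lowest eigenvalues are 0.5 and 1.5, so
# with enough training the minimized trace should approach roughly 2.0.
print("estimated sum of lowest eigenvalues:", float(value))
```

The sampling distribution implicitly defines the inner product under which the eigenfunctions are orthogonal, so the choice of uniform sampling here is itself part of the problem specification rather than an optimization detail.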