Abstract: Dynamical systems are ubiquitous in science and technology, and in many situations it is of interest to model the evolution of such a system over time. Deep learning is an emerging framework for this task, yet it often ignores physical insight about the system under consideration. This talk discusses a stability-promoting mechanism for improving the generalization performance of deep flow prediction models. Motivated by the idea of Lyapunov stability, our findings show that a stabilized model not only generalizes better, but is also less sensitive to the choice of tuning parameters.
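The abstract does not spell out the stabilization mechanism, but the idea can be illustrated with a minimal sketch. One common way to encode discrete-time Lyapunov stability for a learned linear dynamics operator `A` is to penalize a spectral radius above 1, so that training prefers operators whose iterates do not blow up. The function and matrices below are hypothetical illustrations, not the method presented in the talk:

```python
import numpy as np

def stability_penalty(A, margin=1.0):
    """Illustrative stability-promoting regularizer (not from the talk).

    Returns zero when the learned linear dynamics operator A is
    discrete-time Lyapunov stable (spectral radius below `margin`),
    and a positive penalty that grows with the instability otherwise.
    """
    rho = max(abs(np.linalg.eigvals(A)))  # spectral radius of A
    return max(0.0, rho - margin) ** 2

# A stable operator (eigenvalues 0.9 and 0.8) incurs no penalty...
A_stable = np.array([[0.9, 0.1],
                     [0.0, 0.8]])
# ...while an unstable one (eigenvalue 1.5) is penalized.
A_unstable = np.array([[1.5, 0.0],
                       [0.0, 0.7]])

print(stability_penalty(A_stable))    # 0.0
print(stability_penalty(A_unstable))  # 0.25
```

In a training loop, such a term would be added to the prediction loss with a tuning weight; the claim in the abstract is that stabilized models are less sensitive to how such weights are chosen.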