Thematic day on the training of continuous ResNets

The thematic day is scheduled for Friday, August 14th. All presentations will be given on Zoom. The Zoom link is https://princeton.zoom.us/j/97934438988.

Mean-Field Neural ODEs, Relaxed Control and Generalization Errors

We develop a framework for the analysis of deep neural networks and neural ODE models that are trained with stochastic gradient algorithms. We do so by identifying connections between control theory, deep learning and the theory of statistical sampling. We derive Pontryagin's optimality principle and study the corresponding gradient flow, in the form of Mean-Field Langevin dynamics (MFLD), for solving relaxed data-driven control problems. Subsequently, we study uniform-in-time propagation of chaos for the time-discretised MFLD. We derive explicit convergence rates in terms of the learning rate, the number of particles/model parameters and the number of iterations of the gradient algorithm. In addition, we study the error arising from using a finite training data set and thus provide quantitative bounds on the generalisation error. Crucially, the obtained rates are dimension-independent. This is possible by exploiting the regularity of the model with respect to the measure over the parameter space.
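
To give a concrete picture, below is a minimal sketch of time-discretised mean-field Langevin dynamics for a two-layer mean-field network: each neuron is a particle, and training is noisy gradient descent on the particles. The tanh activation, the toy data and all hyperparameters are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

# Sketch of time-discretised Mean-Field Langevin dynamics (MFLD) for a
# two-layer mean-field network f(x) = (1/N) * sum_i a_i * tanh(w_i . x).
# Data, activation and hyperparameters are illustrative only.

rng = np.random.default_rng(0)

N, d, n = 200, 5, 100                 # particles (neurons), input dim, samples
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0])                   # toy regression target

a = rng.standard_normal(N)            # output weight of each particle
W = rng.standard_normal((N, d))       # input weights, one row per particle

lr, temp, lam, n_iters = 0.1, 1e-3, 1e-3, 1000   # step size, temperature, L2 penalty, iterations

for _ in range(n_iters):
    H = np.tanh(X @ W.T)              # (n, N) hidden activations
    resid = H @ a / N - y             # prediction error, shape (n,)

    # Gradients of the regularised empirical risk w.r.t. each particle's parameters.
    grad_a = H.T @ resid / (N * n) + lam * a
    grad_W = ((1 - H**2) * resid[:, None]).T @ X * (a[:, None] / (N * n)) + lam * W

    # Langevin step: gradient descent plus Gaussian noise (entropic regularisation).
    a += -lr * grad_a + np.sqrt(2 * temp * lr) * rng.standard_normal(N)
    W += -lr * grad_W + np.sqrt(2 * temp * lr) * rng.standard_normal((N, d))
```

Roughly speaking, the injected Gaussian noise plays the role of the entropic regularisation in the relaxed control formulation, and the regimes studied in the talk correspond to letting the number of particles grow and the step size shrink.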


This is joint work with J.-F. Jabir. A recording of the talk can be found here.

Machine Learning from a Continuous Viewpoint

We present a continuous formulation of machine learning, as a problem in the calculus of variations and differential-integral equations, very much in the spirit of classical numerical analysis and statistical physics. We demonstrate that conventional machine learning models and algorithms, such as the random feature model, the shallow neural network model and the residual neural network model, can all be recovered as particular discretizations of different continuous formulations. Specifically, we will mainly focus on the flow-based model and the connection to residual networks.
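
As a concrete illustration of the last point, the sketch below (in our notation, not necessarily the speaker's) writes down the flow-based model and shows how a forward Euler discretisation in the time variable recovers the residual-network update.

```latex
% Flow-based (continuous-depth) model: the state z(t) evolves along a
% parametrised vector field and the prediction is read off the terminal state.
\[
  \frac{\mathrm{d}z(t)}{\mathrm{d}t} = f\bigl(z(t), \theta(t)\bigr),
  \qquad z(0) = x, \qquad \hat{y} = g\bigl(z(T)\bigr).
\]
% Forward Euler with step size \Delta t = T/L gives one residual block per time step:
\[
  z_{k+1} = z_k + \Delta t \, f(z_k, \theta_k), \qquad k = 0, 1, \dots, L-1 .
\]
```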


A recording of the presentation can be found here.

Optimization of Neural Networks: A Continuous-Depth Limit Point of View and Beyond

To demystify neural networks, we introduce a new perspective on deep architectures that considers ODEs as the continuum limits of deep neural networks (neural networks with infinitely many layers). In this framework, optimizing a deep neural network becomes an optimal control problem, and from this perspective the optimality condition (i.e. Pontryagin's Maximum Principle) recovers the original backpropagation algorithm. We adopt this view both for the theoretical understanding of neural networks and for empirical algorithm design. Theoretically, we propose a new limiting ODE model of ResNets based on mean-field analysis, which enjoys a benign landscape in the sense that every local minimizer is global. Empirically, we use this framework to design fast adversarial training algorithms. This talk is based on our publications at NeurIPS 2019 and ICML 2020, joint work with Bin Dong, Zhanxing Zhu, Lexing Ying, Jianfeng Lu and others. Here is a summarizing video.
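
To make the link between the maximum principle and backpropagation explicit, here is a short sketch in our own discrete-time notation; it is a standard computation, not necessarily the exact formulation used in the talk.

```latex
% Discrete-time optimal control view of an L-layer network: state dynamics
% z_{k+1} = F(z_k, \theta_k) with terminal loss \Phi(z_L). Define the Hamiltonian
\[
  H_k(z, p, \theta) = p^{\top} F(z, \theta).
\]
% The costate (adjoint) equations of the maximum principle read
\[
  p_L = \nabla_z \Phi(z_L), \qquad
  p_k = \nabla_z H_k(z_k, p_{k+1}, \theta_k)
      = \bigl[\partial_z F(z_k, \theta_k)\bigr]^{\top} p_{k+1},
\]
% and the first-order condition in \theta gives the parameter gradient
\[
  \nabla_{\theta_k} \Phi(z_L)
      = \bigl[\partial_{\theta} F(z_k, \theta_k)\bigr]^{\top} p_{k+1},
\]
% which is precisely the backward pass of backpropagation.
```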

A recording of the talk in this seminar can be found here.

If you have any further questions, please reach out to one of the organizers: Chao, Song or Stephan.