One World Seminar Series on the Mathematics of Machine Learning

The One World Seminar Series on the Mathematics of Machine Learning is an online platform for research seminars, workshops, and seasonal schools in theoretical machine learning. The series focuses on theoretical advances in machine learning and deep learning, complementing the One World seminars on probability, on Information, Signals and Data (MINDS), on methods for arbitrary data sources (MADS), and on imaging and inverse problems (IMAGINE).

The series was started during the Covid-19 pandemic in 2020 to bring together researchers from all over the world for presentations and discussions in a virtual environment. It follows in the footsteps of other community projects under the One World umbrella, which originated around the same time.

We welcome suggestions for speakers on new and exciting developments, and we are committed to providing a platform for junior researchers as well. We recognize the flexibility that online seminars offer, and we are experimenting with different formats. Feedback on any of our events is welcome.

Next Event

Wed May 25
12 noon ET

Beyond Regression: Operators and Extrapolation in Machine Learning

In this talk we first suggest a unification of regression-based machine learning methods, including kernel regression and various types of neural networks. We then consider the limitations of such methods, especially the curse of dimensionality, and various potential solutions that have been proposed, including: (1) Barron's existence result, (2) leveraging regularity, and (3) assuming special structure in the data, such as independence or redundancy. Finally, we consider operator learning and extrapolation as emerging directions for machine learning. Operator learning is the more developed of the two, and we show how learning operators allows for intrinsic regularization and uncertainty quantification and can represent many-to-one and one-to-many mappings. Extrapolation, however, remains the final frontier in machine learning, and we discuss an emerging approach and the mathematics that may underlie it.
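As background for the regression-based view mentioned in the abstract, a minimal sketch of kernel ridge regression, one classical member of this family, might look as follows. The Gaussian kernel, regularization parameter, and toy data below are illustrative choices, not details taken from the talk.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between the rows of X and Y.
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def fit(X, y, reg=1e-3, bandwidth=1.0):
    # Dual coefficients alpha solve the linear system (K + reg * I) alpha = y.
    K = gaussian_kernel(X, X, bandwidth)
    return np.linalg.solve(K + reg * np.eye(len(X)), y)

def predict(X_train, alpha, X_test, bandwidth=1.0):
    # A prediction is a kernel-weighted combination of the dual coefficients.
    return gaussian_kernel(X_test, X_train, bandwidth) @ alpha

# Toy usage: recover a 1D function from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
alpha = fit(X, y)
X_test = np.linspace(-3.0, 3.0, 5)[:, None]
print(predict(X, alpha, X_test))
```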

Wed June 8
12 noon ET

Symmetry in Neural Network Parameters and Non-Linearities

The success of equivariant neural networks has made clear the advantages of considering symmetries of data domains and task functions when designing model architectures. Here, we consider instead symmetries inside the parameter space of neural networks. Group actions on the parameter space which leave the loss invariant can be exploited to accelerate optimization or perform model compression. In particular, we study a class of models with radial rescaling activations, which provide an interesting alternative to pointwise activations and increase symmetry in the parameter space. We prove that this class enjoys universal approximation, even in the case of fixed width and unbounded domain. We also present a lossless compression algorithm and show that gradient descent for the compressed model corresponds to a form of projected gradient descent for the original model. In the general case, I will discuss non-linear group actions on the parameter space which can be used to teleport model parameters and increase the convergence rate. This talk includes joint work with Iordan Ganev and Twan van Laarhoven, as well as Bo Zhao, Nima Dehmamy, and Rose Yu.
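To make the contrast with pointwise activations concrete, here is a minimal illustrative sketch of a radial rescaling activation: a scalar function is applied to the norm of the whole vector rather than to each coordinate. The specific rescaling function (tanh of the norm) is an assumed example and not necessarily the form studied in the talk.

```python
import numpy as np

def radial_rescale(x, h=np.tanh, eps=1e-12):
    # Radial rescaling activation (illustrative form): rescale the whole
    # vector x by h(|x|)/|x|, so the direction x/|x| is preserved and only
    # the norm is transformed. A pointwise activation, by contrast, acts
    # on each coordinate independently.
    norm = np.linalg.norm(x)
    return (h(norm) / (norm + eps)) * x

x = np.array([3.0, -4.0])     # |x| = 5
print(radial_rescale(x))      # same direction as x, norm rescaled to tanh(5)
print(np.tanh(x))             # pointwise tanh: each coordinate transformed separately
```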

Mailing List

Sign up here to join our mailing list and receive announcements. If your browser automatically signs you into a Google account, it may be easiest to join with a university account through an incognito window. For any other concerns, please reach out to one of the organizers.

Format

Seminars are held online on Zoom. The presentations are recorded, and the videos are made available on our YouTube channel. A list of past seminars can be found here. Unless otherwise stated, all seminars are held on Wednesdays at 12 noon ET. The invitation will be shared on this site before each talk and distributed via email.

Board

Wuyang Chen (UT Austin)

Boumediene Hamzi (Caltech)

Franca Hoffmann (University of Bonn)

Issa Karambal (Quantum Leap Africa)

Philipp Petersen (University of Vienna)

Matthew Thorpe (University of Manchester)

Tiffany Vlaar (University of Edinburgh)

Stephan Wojtowytsch (Texas A&M)

Former Board Members

Simon Shaolei Du (University of Washington)

Surbhi Goel (Microsoft Research NY)

Chao Ma (Stanford University)

Song Mei (UC Berkeley)