One World Seminar Series on the Mathematics of Machine Learning

The One World Seminar Series on the Mathematics of Machine Learning is an online platform for research seminars, workshops and seasonal schools in theoretical machine learning. The focus of the series is on theoretical advances in machine learning and deep learning, as a complement to the One World seminars on probability, on Information, Signals and Data (MINDS), on methods for arbitrary data sources (MADS), and on imaging and inverse problems (IMAGINE).

The series was started during the Covid-19 pandemic in 2020 to bring together researchers from all over the world for presentations and discussions in a virtual environment. It follows in the footsteps of other community projects under the One World umbrella, which originated around the same time.

We welcome suggestions for speakers working on new and exciting developments, and we are committed to also providing a platform for junior researchers. We recognize the flexibility that online seminars provide, and we are experimenting with different formats. Feedback on any of our events is welcome.

Next Event

Wed Mar 29
12 noon ET

Images and fibers of the realization map for architectures of feedforward ReLU neural networks

The system of neural architectures is key to the mathematical richness and success of neural networks for at least two reasons. Firstly, the stratification of the space of all functions according to which architecture(s) can realize each function exploits a tension between flexibility and rigidity: any function can be approximated within a sufficiently complicated architecture, but each architecture can represent only a small subset of the space of all functions. Secondly, a neural architecture provides a parameterization of the set of associated functions: the realization map takes a point in parameter space (a list of weights and biases) and outputs a function. The image of the realization map (for a fixed choice of architecture) is the set of functions that can be represented by that architecture; a fiber of the realization map is a set of parameters that all determine the same function. Neural network practitioners interact with the space of all functions by selecting an architecture and then using (e.g. during training) the parameterization of function space provided by the realization map on parameter space. Thus, for any fixed choice of architecture, an important question is: "What are the image and fibers of the realization map?"

I will present some partial answers to this question for architectures of feedforward ReLU neural networks. In particular, I will present examples of fibers with nontrivial topological structures. I will also describe results about the unbounded rays in, and the dimension of, the images of neural network maps of given architectures, as well as constraints imposed on the topology of sublevel sets by the architecture. This is based on joint work with Eli Grigsby, David Rolnick, Robert Meyerhoff, and Chenxi Wu.
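As a concrete illustration of the fiber concept (a minimal sketch, not drawn from the talk itself): for a one-hidden-layer ReLU network, rescaling a hidden unit's incoming weights and bias by any c > 0 while dividing its outgoing weight by c leaves the realized function unchanged, because ReLU is positively homogeneous. The two parameter settings below are therefore distinct points in the same fiber of the realization map. All names here are illustrative, not from the speaker's work.

```python
import numpy as np

def relu_net(x, W1, b1, W2, b2):
    """Realization map: parameters (W1, b1, W2, b2) -> value of the realized function at x."""
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)   # hidden layer: 2 inputs, 3 ReLU units
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)   # output layer

# Rescale hidden unit 0: incoming weights and bias by c, outgoing weight by 1/c.
# Since ReLU(c*z) = c*ReLU(z) for c > 0, the realized function is unchanged,
# so the original and rescaled parameters lie in the same fiber.
c = 2.5
W1s, b1s, W2s = W1.copy(), b1.copy(), W2.copy()
W1s[0] *= c
b1s[0] *= c
W2s[:, 0] /= c

x = rng.normal(size=2)
print(np.allclose(relu_net(x, W1, b1, W2, b2),
                  relu_net(x, W1s, b1s, W2s, b2)))  # True
```

Composing such rescalings across units (and, for some architectures, permuting units within a layer) already shows that fibers are positive-dimensional; the talk's results concern much finer structure than this symmetry alone.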

Mailing List and Google Calendar

Sign up here to join our mailing list and receive announcements. If your browser automatically signs you into a Google account, it may be easiest to join with a university account through an incognito window. For other concerns, please reach out to one of the organizers.

Sign up here for our Google calendar listing all seminars.


Seminars are held online on Zoom. The presentations are recorded, and the videos are made available on our YouTube channel. A list of past seminars can be found here. All seminars, unless otherwise stated, are held on Wednesdays at 12 noon ET. The invitation will be shared on this site before each talk and distributed via email.


Board Members

Wuyang Chen (UT Austin)

Boumediene Hamzi (Caltech)

Franca Hoffmann (University of Bonn)

Issa Karambal (Quantum Leap Africa)

Philipp Petersen (University of Vienna)

Matthew Thorpe (University of Manchester)

Tiffany Vlaar (Mila/McGill University)

Stephan Wojtowytsch (Texas A&M University)

Former Board Members

Simon Shaolei Du (University of Washington)

Surbhi Goel (Microsoft Research NY)

Chao Ma (Stanford University)

Song Mei (UC Berkeley)