One World Seminar Series on the Mathematics of Machine Learning

The One World Seminar Series on the Mathematics of Machine Learning is an online platform for research seminars, workshops, and seasonal schools in theoretical machine learning. The series focuses on theoretical advances in machine learning and deep learning, complementing the One World seminars on probability, on Information, Signals and Data (MINDS), on methods for arbitrary data sources (MADS), and on imaging and inverse problems (IMAGINE).

The series was started during the Covid-19 pandemic in 2020 to bring together researchers from all over the world for presentations and discussions in a virtual environment. It follows in the footsteps of other community projects under the One World Umbrella, which originated around the same time.

We welcome suggestions for speakers on new and exciting developments and are committed to providing a platform for junior researchers as well. We recognize the flexibility that online seminars provide and are experimenting with different formats. Feedback on any of our events is welcome.

Next Event

Wed June 26

12 noon ET

How Over-Parameterization Slows Down Gradient Descent

We investigate how over-parameterization impacts the convergence behavior of gradient descent through two examples. In the context of learning a single ReLU neuron, we prove that the convergence rate shifts from $\exp(-T)$ in the exact-parameterization scenario to an exponentially slower $1/T^3$ rate in the over-parameterized setting. In the canonical matrix sensing problem, specifically symmetric matrix sensing with symmetric parameterization, the convergence rate transitions from $\exp(-T)$ in the exact-parameterization case to $1/T^2$ in the over-parameterized case. Interestingly, employing an asymmetric parameterization restores the $\exp(-T)$ rate, though this rate also depends on the initialization scaling. Lastly, we demonstrate that incorporating an additional step within a single gradient descent iteration can achieve a convergence rate independent of the initialization scaling.
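The slowdown described in the abstract can be seen in a much simpler setting than the talk's: a scalar analogue of over-parameterized matrix factorization, where a target of zero is fit either with an exact parameter $w$ or with the over-parameterized square $x^2$. This is only an illustrative sketch of the phenomenon, not the speakers' construction; the learning rate, horizon, and initialization below are arbitrary choices.

```python
# Toy illustration: gradient descent slows down under over-parameterization.
# Exact parameterization:  minimize f(w) = w^2 / 2      -> linear, exp(-T)-type rate.
# Over-parameterized:      minimize g(x) = (x^2)^2 / 4  -> polynomial rate (loss ~ 1/T^2).
# (Scalar analogue of rank-over-parameterized factorization; illustrative only.)

lr, T = 0.1, 1000
w = x = 0.5  # identical initialization for both runs

for _ in range(T):
    w -= lr * w      # f'(w) = w
    x -= lr * x**3   # g'(x) = x^3

exact_loss = w**2 / 2
over_loss = x**4 / 4
print(exact_loss, over_loss)  # exact loss is many orders of magnitude smaller
```

After 1000 steps the exact parameterization has driven the loss below machine-visible scales, while the over-parameterized run is still at roughly $10^{-6}$: the vanishing gradient of $x^3$ near the optimum is exactly what degrades the linear rate to a polynomial one.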

Mailing List and Google Calendar

Sign up here to join our mailing list and receive announcements. If your browser automatically signs you into a Google account, it may be easiest to join with a university account by going through an incognito window. For other concerns, please reach out to one of the organizers.

Sign up here for our Google Calendar listing all seminars.


Seminars are held online via Zoom. Presentations are recorded, and the videos are made available on our YouTube channel. A list of past seminars can be found here. Unless otherwise stated, all seminars are held on Wednesdays at 12 noon ET. The invitation will be shared on this site before each talk and distributed via email.


Board Members

Wuyang Chen (UC Berkeley)

Bin Dong (Peking University)

Boumediene Hamzi (Caltech)

Issa Karambal (Quantum Leap Africa)

Qianxiao Li (National University of Singapore)

Matthew Thorpe (University of Warwick)

Tiffany Vlaar (University of Glasgow)

Stephan Wojtowytsch (University of Pittsburgh)

Former Board Members

Simon Shaolei Du (University of Washington)

Franca Hoffmann (Caltech)

Surbhi Goel (Microsoft Research NY)

Chao Ma (Stanford University)

Song Mei (UC Berkeley)

Philipp Petersen (University of Vienna)