MATH OF MACHINE LEARNING 2020

FEBRUARY 19-22, 2020

About

Sirius University of Science and Technology and the HDI Lab of HSE University invite you to join the Winter School aimed at undergraduate and graduate students and young postdoctoral fellows from pure and applied mathematics. Participants will learn about modern trends in machine learning, with a strong emphasis on the mathematics behind them. This intense four-day workshop will consist of three interdisciplinary mini-courses by world-class scientists, a poster session by participants, and master classes by industrial partners. Key topics of the Winter School:

  • Reinforcement learning
  • Deep learning
  • Computational optimal transport
  • MCMC
  • Statistical inference

Mini-courses

Gabriel Peyre

CNRS, ENS

Computational Optimal Transport

Optimal transport (OT) is a fundamental mathematical theory at the interface between optimization, partial differential equations and probability. It has recently emerged as an important tool to tackle a surprisingly large range of problems in data sciences, such as shape registration in medical imaging, structured prediction problems in supervised learning and training deep generative networks. This course will interleave the description of the mathematical theory with the recent developments of scalable numerical solvers. This will highlight the importance of recent advances in regularized approaches for OT which allow one to tackle high dimensional learning problems. Material for the course (including a small book, slides and computational resources) can be found online at https://optimaltransport.github.io/.
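As a concrete, heavily simplified illustration of the regularized approaches mentioned above, the sketch below runs plain Sinkhorn iterations for entropy-regularized OT between two 1D histograms. The histograms, cost matrix, and the regularization strength eps are made up for the example and are not taken from the course material.

```python
# Minimal sketch of entropy-regularized OT solved with Sinkhorn iterations.
# All inputs (a, b, C, eps) are illustrative assumptions.
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iters=500):
    """Approximate OT coupling between histograms a and b for cost matrix C,
    using entropic regularization of strength eps."""
    K = np.exp(-C / eps)                   # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                  # scale columns to match marginal b
        u = a / (K @ v)                    # scale rows to match marginal a
    return u[:, None] * K * v[None, :]     # coupling P = diag(u) K diag(v)

# Example: transport between two small histograms on a 1D grid.
x = np.linspace(0, 1, 50)
a = np.exp(-((x - 0.2) ** 2) / 0.01); a /= a.sum()
b = np.exp(-((x - 0.7) ** 2) / 0.02); b /= b.sum()
C = (x[:, None] - x[None, :]) ** 2         # squared-distance cost
P = sinkhorn(a, b, C)
print("regularized transport cost ~", (P * C).sum())
```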

Lukasz Szpruch

Alan Turing Institute

Mean-Field Langevin SDEs, Stochastic Control and Deep Learning

Deep neural networks trained with stochastic gradient descent are becoming ubiquitous in technology and science. However, our fundamental understanding of these methodologies is lagging far behind. In this course, I will present a mathematical framework that can be used to understand non-convex optimisation and generalisation in deep neural networks.

Several different theories have been put forward to help understand deep learning. The most promising direction builds on the theory of gradient flows on the space of probability measures. The picture that emerges from these works is that learning the optimal weights in deep neural networks can be viewed as a sampling problem: the aim of the algorithm is to find an optimal distribution over the parameter space, rather than optimal values of the parameters. As a consequence, the individual values of the parameters are not important, in the sense that different sets of weights sampled from the correct (optimal) distribution are equally good. To learn optimal weights, one therefore needs an algorithm that samples from the correct distribution.

The key mathematical tool for studying this phenomenon turns out to be differential calculus on the space of measures, which I will review.
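The following minimal sketch, written purely for illustration, shows the "learning as sampling" viewpoint in its simplest form: a one-hidden-layer network trained with noisy (Langevin-type) gradient steps, so the weights end up as draws from a distribution rather than a single optimum. The toy data, network size, and hyperparameters are assumptions for the example, not material from the course.

```python
# Illustrative sketch: noisy (Langevin-type) gradient descent on a
# one-hidden-layer network, i.e. sampling weights rather than optimizing them.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(2*pi*x) + noise.
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.normal(size=200)

m = 100                                   # number of hidden units ("particles")
W = rng.normal(size=(m, 1))               # input weights, one row per unit
c = rng.normal(size=m) / m                # output weights

def predict(X, W, c):
    return np.tanh(X @ W.T) @ c           # prediction is a sum over the m units

lr, sigma = 0.05, 0.01                    # step size and noise level ("temperature")
for _ in range(2000):
    H = np.tanh(X @ W.T)                  # (n, m) hidden activations
    r = predict(X, W, c) - y              # residuals
    grad_c = H.T @ r / len(y)
    grad_W = ((1 - H ** 2) * r[:, None] * c[None, :]).T @ X / len(y)
    # Langevin step: gradient descent plus Gaussian noise on every parameter,
    # so the weights explore a distribution instead of collapsing to one point.
    c -= lr * grad_c + np.sqrt(2 * lr) * sigma * rng.normal(size=c.shape)
    W -= lr * grad_W + np.sqrt(2 * lr) * sigma * rng.normal(size=W.shape)

print("train MSE:", np.mean((predict(X, W, c) - y) ** 2))
```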

Michal Valko

DeepMind

Reinforcement Learning

In reinforcement learning, we learn how to act. This course will be an introduction to the models and mathematical tools used to formalize the problem of learning and decision-making under uncertainty. In particular, we cover the mathematics of policy evaluation, planning, and learning.
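As a small illustration of the first of these topics, the sketch below performs tabular policy evaluation by iterating the Bellman expectation backup on a made-up three-state Markov chain; the transition matrix, rewards, and discount factor are invented for the example and have no connection to the course material.

```python
# Minimal sketch of tabular policy evaluation: iterate V <- r_pi + gamma * P_pi V
# until a fixed point. The 3-state chain below is a made-up example.
import numpy as np

gamma = 0.9
# Transition matrix P_pi and reward vector r_pi under a fixed policy pi.
P_pi = np.array([[0.8, 0.2, 0.0],
                 [0.1, 0.6, 0.3],
                 [0.0, 0.2, 0.8]])
r_pi = np.array([0.0, 1.0, 5.0])

V = np.zeros(3)
for _ in range(1000):
    V_new = r_pi + gamma * P_pi @ V        # Bellman expectation backup
    if np.max(np.abs(V_new - V)) < 1e-10:  # stop at the (numerical) fixed point
        V = V_new
        break
    V = V_new

print("V_pi ~", V)  # agrees with solving (I - gamma * P_pi) V = r_pi directly
```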

Conference program

Tentative list of posters

  • Achille Thin, Nikita Kotelevskii, Jean-Stanislas Denain, Leo Grinsztajn, Alain Durmus, Maxim Panov and Eric Moulines "A new method for combining Markov Chain Monte Carlo and Variational Inference"
  • Kirill Fedyanin, Evgenii Tsymbalov, Nikita Mokrov and Maxim Panov "Dropout Revisited: Improved Uncertainty Estimation via Diversity Sampled Implicit Ensembles"
  • Marina Gomtsyan, Nikita Mokrov, Maxim Panov and Yury Yanovich "Geometry-Aware Maximum Likelihood Estimation of Intrinsic Dimension"
  • Darina Dvinsikh and Alexander Gasnikov "SA vs SAA for population Wasserstein barycenter calculation"
  • Alexey Kroshnin, Alexandra Suvorikova and Vladimir Spokoiny "Statistical inference on Bures-Wasserstein space: from theory to practice"
  • Taras Khakhulin, Roman Schutski and Ivan Oseledets "Graph Policy for Solving Tree Decomposition via Reinforcement Learning Heuristics"
  • Maksim Kuznetsov, Daniil Polykovskiy, Dmitry Vetrov and Alexander Zhebrak "A Prior of a Googol Gaussians: a Tensor Ring Induced Prior for Generative Models"
  • Nikita Puchkin and Vladimir Spokoiny "Learning a manifold with an unknown dimension"
  • Maxim Kaledin, Eric Moulines, Alexey Naumov, Vladislav Tadic and Hoi-To Wai "Finite Time Analysis of Linear Two-timescale Stochastic Approximation with Markovian Noise"
  • Denis Belomestny, Eric Moulines, Sergey Samsonov and Nurlan Shagadatov "Variance reduction for MCMC methods via martingale representations"
  • Alexander Korotin, Vage Egiazarian, Arip Asadulaev and Evgeny Burnaev "Wasserstein-2 Generative Networks"


List of Participants

Link

Venue

The Winter School will be held at Sirius University (Sochi, Olympiyskyi prospekt, 1; room: TBA).

Registration

Registration is mandatory for anyone wishing to participate. We provide accommodation for invited speakers and for junior researchers (BS, MS, and PhD students and young postdocs) selected by the organising committee. Since the number of places we provide is limited, participants have to submit a short CV (in English or Russian). We also provide travel grants for young researchers from Russia. The organising committee will carefully review all applications and will proceed with the strongest candidates based on their skills and motivation.

Registration is closed.

Organisers

Program committee

Ecole Polytechnique & HSE University
WIAS, Berlin & HSE University
HSE University

Organising committee

Alexey NAUMOV (HSE University)

Evgenia KULIKOVA (Yandex)

Partners