Mathematical Foundations of Machine Learning

Master of Science in Informatics in Grenoble
Master of Science in Industrial and Applied Mathematics
Université Grenoble Alpes - Institut National Polytechnique de Grenoble

Program

Part I.1   Supervised Learning
This part gives an overview of the foundations of supervised learning. We will see that learning is an inductive process in which a general rule is to be found from a finite set of labeled observations by minimizing the empirical risk of the rule over that set. The study of consistency gives conditions under which, in the limit of infinite sample size, the minimizer of the empirical risk leads to a value of the risk that is as good as the best attainable risk. Direct minimization of the empirical risk is not tractable, as the latter is not differentiable, so learning algorithms find the parameters of the learning rule by minimizing a convex upper bound (or surrogate) of the empirical risk. We present classical strategies for unconstrained convex optimization: gradient descent, quasi-Newton approaches, and conjugate gradient descent. We present classical learning algorithms for binary classification: the perceptron, logistic regression, and boosting, linking the development of these models to the Empirical Risk Minimization framework, as well as the multi-class classification paradigm. In particular, we present the Multi-Layer Perceptron as well as the back-propagation algorithm used in deep learning.
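As a concrete illustration of minimizing a convex surrogate of the empirical risk, the following minimal sketch (illustrative only, not part of the course material) trains logistic regression for binary classification by plain gradient descent on the logistic loss; the synthetic data, learning rate, and iteration count are arbitrary assumptions.

import numpy as np

def logistic_loss(w, X, y):
    # Convex surrogate of the 0/1 empirical risk: mean of log(1 + exp(-y <w, x>))
    margins = y * (X @ w)
    return np.mean(np.log1p(np.exp(-margins)))

def logistic_grad(w, X, y):
    # Gradient of the surrogate with respect to the parameters w
    margins = y * (X @ w)
    coef = -y / (1.0 + np.exp(margins))
    return (X * coef[:, None]).mean(axis=0)

def gradient_descent(X, y, lr=0.5, n_iters=500):
    # Plain (batch) gradient descent on the convex surrogate
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        w -= lr * logistic_grad(w, X, y)
    return w

# Toy usage on synthetic data with labels in {-1, +1}
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sign(X @ np.array([1.5, -2.0]))
y[y == 0] = 1.0
w_hat = gradient_descent(X, y)
surrogate_value = logistic_loss(w_hat, X, y)
empirical_risk = np.mean(np.sign(X @ w_hat) != y)  # 0/1 empirical risk of the learned rule

The same scheme covers the other surrogates seen in this part (for instance the exponential loss behind boosting) by swapping in the corresponding loss and its gradient.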
 
Part I.2   Unsupervised and semi-supervised Learning
We will present generative models for clustering as well as two powerful tools for parameter estimation, namely the Expectation-Maximization (EM) and Classification Expectation-Maximization (CEM) algorithms. In the context of Big Data, labeling observations for learning is a tedious task. The semi-supervised paradigm aims at learning from few labeled and a large amount of unlabeled data. In this part we review the three families of techniques proposed in semi-supervised learning, that is, graphical, generative, and discriminant models.
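The following minimal sketch (illustrative assumptions: a one-dimensional sample and two Gaussian components) shows the EM iterations for mixture-model clustering, alternating between posterior responsibilities (E-step) and parameter re-estimation (M-step); CEM only adds a hard assignment of each observation to its most probable component between the two steps.

import numpy as np

def em_gmm_1d(x, n_iters=100, hard_assignment=False):
    # EM (or CEM when hard_assignment=True) for a two-component 1-D Gaussian mixture
    pi = np.array([0.5, 0.5])              # mixing proportions
    mu = np.array([x.min(), x.max()])      # crude initialisation of the means
    var = np.array([x.var(), x.var()])     # and of the variances
    for _ in range(n_iters):
        # E-step: posterior responsibilities p(z = k | x_i)
        dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        if hard_assignment:
            # C-step (CEM): assign each point to its most probable component
            resp = np.eye(2)[resp.argmax(axis=1)]
        # M-step: re-estimate the parameters from the (soft or hard) responsibilities
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Toy usage on a sample drawn from a mixture of two Gaussians
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
pi, mu, var = em_gmm_1d(x)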
 
Part II.1   Adversarial bandits and online learning (taught by Pierre Gaillard)
  • Online prediction with expert advice (see the Hedge sketch after this list)
  • Online convex optimization
  • Adversarial bandits
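For the first topic above, the exponentially weighted average forecaster (Hedge) can be sketched as follows; the loss matrix, the learning rate eta, and the assumption that losses lie in [0, 1] are illustrative.

import numpy as np

def hedge(expert_losses, eta=0.1):
    # Exponentially weighted average forecaster over K experts for T rounds
    # expert_losses: array of shape (T, K) with losses assumed to lie in [0, 1]
    T, K = expert_losses.shape
    weights = np.ones(K)
    learner_loss = 0.0
    for t in range(T):
        p = weights / weights.sum()                  # current distribution over experts
        learner_loss += p @ expert_losses[t]         # (expected) loss of the mixture
        weights *= np.exp(-eta * expert_losses[t])   # multiplicative weight update
    return learner_loss, weights / weights.sum()

# Toy usage: 3 experts, 1000 rounds of random losses
rng = np.random.default_rng(0)
losses = rng.uniform(size=(1000, 3))
cumulative_loss, final_weights = hedge(losses)

With eta of order sqrt(ln K / T), the regret of this forecaster against the best expert is O(sqrt(T ln K)).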
 
Part II.2   Reinforcement learning (taught by Nicolas Gast)
  • Markov decision processes
  • Classical RL algorithms (see the Q-learning sketch after this list)
  • Modern RL (Deep RL, MCTS)
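For the classical-algorithms item above, here is a sketch of tabular Q-learning with epsilon-greedy exploration; the environment interface step(s, a) returning (next_state, reward, done), the fixed start state, and all hyper-parameters are illustrative assumptions.

import numpy as np

def q_learning(n_states, n_actions, step, alpha=0.1, gamma=0.95,
               epsilon=0.1, n_episodes=500, horizon=100, seed=0):
    # Tabular Q-learning; `step(s, a)` is an assumed environment returning (s_next, reward, done)
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(n_episodes):
        s = 0  # assume every episode starts in state 0
        for _ in range(horizon):
            # epsilon-greedy action selection
            a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[s]))
            s_next, r, done = step(s, a)
            # temporal-difference update toward the Bellman optimality target
            target = r + gamma * (0.0 if done else np.max(Q[s_next]))
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
            if done:
                break
    return Q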



Homework (2023/2024)

The homework on the offline learning part is due by email before November 5th, 11:59pm (hard deadline). Please send the .ipynb file of your work, using [MFML HW] Your Name as the subject of your email.

Past MFML exams (offline learning part)


References