Short course: Sophie Langer - CANCELLED
By EDT stat Actu

Short course on

"(Deep) Statistical Learning"


Sophie Langer,
University of Twente


UCLouvain ISBA, Louvain-la-Neuve, Voie du Roman Pays 20, Room B-135


27/10/2021 : 10h45 - 12h45 and 14h00 - 16h00

28/10/2021 : 10h45 - 12h45 and 14h00 - 16h00


This course covers the foundations and recent advances of deep neural networks (DNNs) from the point of view of statistical theory. Understanding the power of DNNs theoretically is arguably one of the greatest open problems in machine learning. Over the last decades, DNNs have made rapid progress in various machine learning tasks such as image and speech recognition and game intelligence. Unfortunately, little is yet known about why the method is so successful in practical applications. Recently, several research directions have emerged that aim to explain the power of DNNs from a theoretical point of view. From the perspective of statistical theory, a number of results have already established good convergence rates for DNNs when learning different function classes.
The course is roughly divided into two parts. In the first part, DNNs are introduced and different network architectures are discussed. In the second part, we focus on the statistical theory of DNNs. Here we introduce frameworks addressing two key puzzles of DNNs: approximation theory, where we gain insight into the approximation properties of DNNs in terms of network depth and width for various function classes, and generalization, where we analyze the rate of convergence for both regression and classification problems.
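As a point of reference for the generalization part, a classical benchmark from nonparametric regression (standard background, not taken from the course text) is the minimax rate for estimating a β-smooth regression function in d dimensions:

```latex
% Minimax rate over beta-Hoelder-smooth functions in d dimensions (Stone, 1982):
\inf_{\hat f} \; \sup_{f \in \mathcal{C}^{\beta}}
  \mathbb{E}\, \| \hat f - f \|_{2}^{2} \;\asymp\; n^{-\frac{2\beta}{2\beta + d}}
```

The exponent deteriorates as d grows (the curse of dimensionality); results such as Schmidt-Hieber (2020) show that sparse deep ReLU networks can attain faster rates when the regression function has a compositional structure, which is one of the phenomena the course addresses.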

Table of contents
1.    Motivation: The present great success of Deep Learning
2.    Statistical learning theory: Regression and Classification
3.    The theory behind Deep Learning
4.    Introduction of neural networks
5.    Different network architectures and activation functions
6.    Theory of shallow neural networks
7.    The advantage of multiple hidden layers: In theory and applications
8.    Approximation theory: Which function classes can be efficiently approximated by DNNs?
9.    Generalization: Convergence rates of DNNs for different regression and classification problems. When are those networks able to circumvent the curse of dimensionality?
10.    Limits and open questions of DNNs
During the course we will have some interactive parts to consider different applications of deep learning and to get a better understanding of the theoretical results. In particular, we will compare the performance of shallow and deep networks on data examples in Python.
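To make the depth-versus-width comparison concrete, here is a minimal self-contained sketch (our own illustration, not course material) of the classical sawtooth construction: composing a two-unit ReLU "hat" layer k times yields a function with 2^(k-1) oscillations, while any one-hidden-layer ReLU network with m units is piecewise linear with at most m+1 pieces and so needs exponentially many units to match it.

```python
def relu(t):
    # rectified linear unit, the activation used throughout
    return max(t, 0.0)

def hat(t):
    # one hidden layer with two ReLU units:
    # g(t) = 2*relu(t) - 4*relu(t - 0.5), the "tent" map on [0, 1]
    return 2.0 * relu(t) - 4.0 * relu(t - 0.5)

def deep_sawtooth(t, depth):
    # composing the hat layer `depth` times gives a ReLU network of
    # depth `depth` with only 2*depth hidden units in total; its graph
    # is a sawtooth with 2**(depth - 1) teeth on [0, 1]
    for _ in range(depth):
        t = hat(t)
    return t

# a shallow net with m hidden ReLU units is piecewise linear with at
# most m + 1 pieces, so matching the 2**depth pieces of this sawtooth
# forces m to grow exponentially in the depth
grid = [i / 1600 for i in range(1601)]
values = [deep_sawtooth(x, 4) for x in grid]
teeth = sum(1 for v in values if abs(v - 1.0) < 1e-12)  # peaks of height 1
```

Plotting `values` against `grid` makes the eight teeth visible; increasing `depth` shows exponential growth in oscillations at only linear growth in the number of parameters.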


Registration is free but compulsory; please send an e-mail to .

NB: Participants who face a technical constraint preventing them from attending one or more of the four sessions in the lecture room can request access to a (basic) live transmission via Zoom.

