A reading group organised by the Data Science group of the LSE Department of Statistics. Topics cover machine learning techniques, with a focus on the main ideas from a theoretical point of view, though applications of the methods are not excluded. See here for last year’s reading group.
The group is currently open to all members of the Department of Statistics. If you want to stop receiving emails about it, contact Kayleigh Brewer.
The reading group sessions take place every second Monday, 13:00-14:00, in the Leverhulme Library, Department of Statistics, 6th Floor, Columbia House. Lunch will be available from 12:30.
Date | Topic | Discussion lead |
---|---|---|
14th Oct, 2019 | A Theoretical Analysis of Deep Q-Learning, Yang, Xie and Wang, 2019 | Chengchun Shi, LSE |
28th Oct, 2019 | Kernel Mean Embedding of Distributions: A Review and Beyond, Muandet, Fukumizu, Sriperumbudur and Schölkopf, 2017. I will give an introduction, intended for a wider audience, to reproducing kernel Hilbert spaces (RKHSs) and their application to kernel mean embeddings of probability distributions, loosely based on the above paper. As an application, I will show how kernel mean embeddings can be used to define dependence measures (a toy numerical illustration follows this table), and also outline the limitations of this approach. I will argue that RKHSs have a fundamental role to play in statistical science, i.e., that they are more than just another tool. | Wicher Bergsma, LSE |
11th Nov, 2019 | Learning Generative Models with Sinkhorn Divergences, Genevay, Peyré and Cuturi. Presentation Slides | Bea Acciaio, LSE |
25th Nov, 2019 | Estimating the success of re-identifications in incomplete datasets using generative models | Yves-Alexandre de Montjoye, Imperial |
9th Dec, 2019 | Towards a statistical foundation of deep learning. Recently, a lot of progress has been made in the theoretical understanding of deep learning. One particularly promising direction is the statistical approach, which interprets deep learning as a statistical method and builds on existing techniques in mathematical statistics to derive theoretical error bounds. The talk surveys this field and describes future challenges. Presentation Slides | Johannes Schmidt-Hieber, University of Twente |
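The abstract for the 28th Oct session mentions that kernel mean embeddings can be used to define dependence measures. One standard such measure from this literature is the Hilbert-Schmidt Independence Criterion (HSIC); below is a minimal NumPy sketch of its biased empirical estimator with Gaussian kernels. The bandwidth, sample size, and toy data are illustrative assumptions, not taken from the paper or the talk.

```python
import numpy as np

def gaussian_gram(x, bandwidth=1.0):
    """Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 * bandwidth^2))."""
    sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def hsic(x, y, bandwidth=1.0):
    """Biased empirical HSIC: trace(K H L H) / n^2, where H centres the
    Gram matrices K (of x) and L (of y). The population HSIC is zero iff
    x and y are independent when a characteristic kernel (e.g. Gaussian)
    is used."""
    n = x.shape[0]
    K = gaussian_gram(x, bandwidth)
    L = gaussian_gram(y, bandwidth)
    H = np.eye(n) - np.full((n, n), 1.0 / n)  # centring matrix
    return np.trace(K @ H @ L @ H) / n ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
y_dep = x ** 2 + 0.1 * rng.normal(size=(200, 1))  # nonlinear dependence on x
y_ind = rng.normal(size=(200, 1))                 # independent of x
print(hsic(x, y_dep))  # noticeably larger
print(hsic(x, y_ind))  # close to zero
```

The quadratic relationship between x and y_dep has essentially zero linear correlation but is picked up by HSIC; this is the sense in which embedding distributions into an RKHS yields dependence measures that go beyond classical ones.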
The reading group sessions take place every second Thursday, 13:00-14:00, in the Leverhulme Library, Department of Statistics, 6th Floor, Columbia House. Lunch will be available from 12:30.
Date | Topic | Discussion lead |
---|---|---|
postponed | Computation, Statistics, and Optimization of Random Functions. When faced with a data analysis, learning, or statistical inference problem, the amount and quality of the available data fundamentally determine whether such tasks can be performed to a given level of accuracy. Indeed, many theoretical disciplines study the limits of such tasks by investigating whether a dataset effectively contains the information of interest. With the growing size of datasets, however, it is crucial not only that the underlying statistical task is possible, but also that it is doable by means of efficient algorithms. In this talk we will discuss methods aiming to establish the limits of when statistical tasks are possible with computationally efficient methods, or when there is a fundamental "statistical-to-computational gap" in which an inference task is statistically possible but inherently computationally hard. This is intimately related to understanding the (average-case, as opposed to the most commonly studied worst-case) computational hardness of optimizing random functions, which is tightly connected, among other things, to statistical physics and the study of spin glasses and random geometry. | Afonso Bandeira, ETH |
13th Feb, 2020 | Differential Privacy | Qiwei Yao, LSE |
postponed | TBA | Michalis Titsias, Google DeepMind |
postponed | TBA | Arthur Gretton, UCL |