COHEN-SOLAL Quentin

Artificial Intelligence

Université Paris Dauphine-PSL

quentin.cohen-solal [at] dauphine.psl.eu

Short bio

PhD at the University of Caen

Research topic

Reinforcement learning in games.

Short abstract

This postdoc focuses on the study and improvement of learning and planning algorithms in games.

EVEN Mathieu

PhD Student

Inria / ENS

mathieu.even [at] inria.fr

Short bio

M2 Orsay/Paris-Saclay

Thesis title

On Federated and Distributed Learning Problems.

Short abstract

We study federated and distributed learning problems, with a strong emphasis on theoretical guarantees. A (possibly large) number of agents aim at making predictions (supervised or unsupervised learning). To what extent can they benefit from collaborating with each other, depending on the learning problem and the communication constraints?

TRIBOULIN Amaury

PhD Student

Inria

amaury.triboulin [at] inria.fr

Short bio

Master’s Degree at Ecole normale supérieure

Thesis title

Symmetries in Machine Learning for Structured Data.

Short abstract

In this thesis, we will consider high-dimensional problems with an additional structure that comes from the geometry of the input signal, and explore ways to incorporate this geometric structure into the learning algorithms. We have already started to investigate new architectures based on equivariant layers, which we tested on combinatorial optimization problems, showing that it is possible to learn representations of hard (typically NP-hard) problems. We believe this could lead to new, less resource-intensive algorithms for learning efficient heuristics on practical instances.

SIMSEKLI Umut

Machine Learning

Inria

umut.simsekli [at] inria.fr

Short bio

I am a Research Faculty member (Chargé de Recherche) at Inria (SIERRA team) and at the Computer Science Department of École Normale Supérieure. I received my PhD from the Computer Engineering Department of Boğaziçi University in 2015. From 2016 to 2020, I was an associate professor at Télécom Paris, Institut Polytechnique de Paris, and during 2019-2020 I spent one year as a visiting faculty member at the Department of Statistics, University of Oxford.

Topics of interest

Theory of Deep Learning, Markov Chain Monte Carlo.

Project in Prairie

My main research area is machine learning, including theory, algorithms, and applications. My ultimate goal has been to develop theory that closely follows practice and leads to methods that have impact on real problems. In particular, I have been interested in (i) analyzing deep learning methods from the lens of heavy-tailed dynamical systems theory, and (ii) implicit generative modeling algorithms by using tools from computational optimal transport.

Quote

Machine learning is a fascinating field, which continuously generates exciting and quite nontrivial theoretical, practical, and even societal/ethical questions. Attacking the questions from all these aspects simultaneously, the partial solutions offered by machine learning researchers have only posed additional questions and revealed many interesting and sometimes surprising “phenomena”. With such puzzling observations, I believe the machine learning of today should be treated as a natural science, rather than an engineering science, with many mysteries to be discovered and with many potential outcomes that might outreach its apparent scope.

ROYER Clément

Optimization

Dauphine - PSL

clement.royer [at] dauphine.psl.eu

Short bio

Clément Royer is an associate professor of computer science at Université Paris Dauphine-PSL and a researcher in the MILES team at LAMSADE. From 2016 to 2019, he was a postdoctoral research associate at the Wisconsin Institute of Discovery, University of Wisconsin-Madison, USA. He received his Ph.D. in applied mathematics from the University of Toulouse, France, in 2016. Clément is a recipient of the COAP Best Paper Prize for 2019.

Topics of interest

Numerical optimization, Optimization for machine learning, Randomized algorithms.

Project in Prairie

As the amount of data available and the complexity of the models keep increasing, a number of issues arise in deploying optimization techniques for artificial intelligence at scale. Such challenges have long been integrated in high-performance computing, where the combination of optimization with other fields from numerical linear algebra to differential equations has led to powerful algorithms. This project aims at adopting a similar approach with optimization methods for data science.

Quote

My research aims at developing optimization methods for artificial intelligence that leverage existing methodology and advances from scientific computing along two axes. On one hand, we motivate the use of standard algorithmic frameworks for scientific computing in modern learning tasks by proposing practical schemes with complexity guarantees. Our research will aim at analyzing the complexity of classical second-order methods used in scientific computing so as to design frameworks with theoretical grounds and practical appeal for artificial intelligence. On the other hand, we develop derivative-free algorithms for automated parameter tuning of complex data science models. Our setting will be that of expensive, black-box systems for which a number of parameters require calibration.

DO Virginie

PhD Student

Dauphine - PSL

virginie.do [at] dauphine.eu

Short bio

MSc in Applied Mathematics / Diplôme d’Ingénieur – Ecole Polytechnique

MSc in Social Data Science – University of Oxford

Thesis title

Fairness in machine learning: insights from social choice.

Short abstract

Designing fair algorithms has recently emerged as a major issue in machine learning, and more generally in AI, while fairness has long been studied in economics, especially in social choice theory. My goal is to bring together the notions of fairness of the two communities, and to leverage the concepts and mathematical tools of social choice to address the new challenges of fairness in machine learning.

Mishra Shrey

PhD Student

L’Ecole normale supérieure - PSL

Shrey.Mishra [at] ens.fr

Short bio

Manipal University (India, BTech)

CESI School of Engineering (software major, école d'ingénieurs)

Munster Technological University (MSc Artificial Intelligence)

Thesis title

Extracting information from published scientific articles and building a knowledge base from it, applying various AI / machine learning techniques.

Short abstract

Every year, thousands of scientific papers are published in academia, covering scientific proofs, theorems, and relations, in the form of PDF documents. I work on TheoremKb (a project led by Pierre Senellart) to extract information from scientific articles, training machine learning models to identify and relate documents based on the information they express (including mathematical proofs).

ZHOU Anqi

PhD Student

Institut Pasteur

anqi.zhou [at] pasteur.fr

Short bio

BSc. Applied Mathematics, BA. Neuroscience

MSc. Biotechnology, Brown University, USA

Thesis title

Rapidly identifying therapeutics of Alzheimer’s Disease using millions of Drosophila larvae and amortized inference.

Short abstract

Alzheimer’s Disease (AD) affects millions of people worldwide, yet the limited treatments address only the physiological symptoms instead of the cause of pathogenesis. The goal of this PhD project is to establish a new pipeline for measuring AD phenotypes that leverages the advantages of Drosophila as a model system for circuit studies and links probabilistic behavior to disease progression. The pipeline builds on automated machine learning to rapidly analyze data from millions of larvae.

d’ASCOLI Stéphane

PhD Student

L’Ecole normale supérieure - PSL / FAIR Paris

stephane.dascoli [at] gmail.com

Short bio

Master in Theoretical Physics, ENS Paris

Thesis title

Deep learning: from toy models to modern architectures.

Short abstract

My research focuses on understanding how deep neural networks are able to generalize despite being heavily overparametrized. On one hand, I use tools from statistical mechanics to study simple models, and try to understand when and why they overfit. On the other hand, I investigate how different types of inductive biases affect learning, from fully-connected networks to convolutional networks to transformers.

ABLIN Pierre

Postdoctoral Researcher

CNRS

pierreablin [at] gmail.com

Short bio

PhD Inria

Research project

Understanding neural networks with differential equations.

Short abstract

Neural networks have encountered great empirical success, yet the reasons behind this success are still mostly unknown. It has recently been proposed to draw bridges between neural networks and differential equations. I study the nature of this link, and its implications on the theoretical and practical properties of neural networks.
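One standard way this bridge is made concrete (a minimal illustrative sketch, not the author's own work: the layer `f`, weights, and step sizes below are toy choices) is to view a residual block x ← x + h·f(x) as one explicit Euler step of the ODE dx/dt = f(x), so that stacking more blocks with smaller steps approximates the same continuous flow:

```python
import numpy as np

def f(x):
    # Toy "layer": a smooth nonlinearity applied to a fixed linear map.
    W = np.array([[0.0, -1.0], [1.0, 0.0]])  # rotation-like weights (arbitrary)
    return np.tanh(W @ x)

def resnet_forward(x0, depth, h):
    """Apply `depth` residual blocks x <- x + h * f(x).

    Each block is exactly one explicit Euler step of dx/dt = f(x),
    so the whole network integrates the ODE over time T = depth * h.
    """
    x = x0
    for _ in range(depth):
        x = x + h * f(x)
    return x

# A deep network with small steps and a shallow one with large steps
# cover the same total "time" T = 1, so both approximate the same flow.
x0 = np.array([1.0, 0.0])
coarse = resnet_forward(x0, depth=10, h=0.1)     # shallow, large steps
fine = resnet_forward(x0, depth=1000, h=0.001)   # deep, small steps
print(np.linalg.norm(coarse - fine))  # small: Euler error shrinks with h
```

Under this view, properties of the ODE (stability, invertibility of the flow, discretization error) translate into statements about the network, which is the kind of link the abstract refers to.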