asma.nouira [at] mines-paristech.fr
Master's degree, National Engineering School of Sousse, Tunisia
Engineering degree, National Engineering School of Sousse, Tunisia
Stable feature selection in multi-locus Genome Wide Association Studies.
Our main goal is to provide a stable framework for Genome Wide Association Studies using machine learning, in particular feature selection models suited to high-dimensional data. Many challenges lie ahead, such as genetic population stratification, the clustering of linkage disequilibrium patterns, the stability of the selection, and the computational complexity. We aim to address these issues by developing efficient algorithms applied to real data in case-control studies, such as breast cancer.
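The stability concern above can be illustrated with a toy stability-selection loop: rerun a base selector on random subsamples and keep only the features chosen in most runs. This is a minimal numpy sketch, not the project's actual pipeline; the simulated data, the correlation-based selector, and the 0.6 frequency threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy case-control data: n samples, p features (e.g. SNP dosages 0/1/2),
# with the first three features truly associated with the label.
n, p = 200, 50
X = rng.integers(0, 3, size=(n, p)).astype(float)
logits = X[:, 0] + X[:, 1] - X[:, 2]
y = (logits + rng.normal(0, 1, n) > 1).astype(float)

def select_top_k(X, y, k=5):
    """Rank features by absolute correlation with the label, keep the top k."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    scores = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    return set(np.argsort(scores)[-k:])

# Stability selection: rerun the selector on random half-subsamples and
# keep the features chosen in a large fraction of the runs.
B = 100
counts = np.zeros(p)
for _ in range(B):
    idx = rng.choice(n, size=n // 2, replace=False)
    for j in select_top_k(X[idx], y[idx]):
        counts[j] += 1

stable = np.where(counts / B >= 0.6)[0]
print("stably selected features:", stable)
```

Features that survive the frequency cut are robust to perturbations of the sample, which is exactly the notion of selection stability targeted here.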
florentin.goyens [at] dauphine.psl.eu
PhD in mathematics at the University of Oxford
Most of my research is related to continuous nonconvex optimization. I am particularly interested in constrained optimization problems, for instance over smooth manifolds, and in second-order methods. I consider applications in numerical analysis and machine learning.
Université de Paris / Inria / Inserm / Implicity (CIFRE thesis)
louis.vincent [at] implicity.fr
Master 2 – Mathématiques, Vision & Apprentissage (ENS Paris-Saclay),
Master 2 – Statistiques (Sorbonne Université – Campus Pierre et Marie Curie)
Longitudinal data encoding applied to medical decision support in telecardiology.
In telecardiology, as in many other fields of modern medicine, we have at our disposal large amounts of data describing the evolution of a patient. These data are often missing or corrupted, and data from several sources can be of a different nature, which makes them difficult to exploit.
My goal is to develop a model capable of synthesizing different types of temporal data via auto-encoders in order to infer the state of a patient. In the context of telecardiology, this could for instance allow us to predict deteriorations of a patient’s health status, and thus anticipate and prevent more serious complications.
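As a rough sketch of the encoding idea (assuming nothing about the actual model, which relies on richer auto-encoders), a linear auto-encoder compressing toy fixed-length trajectories into a low-dimensional patient embedding might look like this; the data generator, dimensions, and training loop are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "longitudinal" data: n series of T time steps generated from a
# 2-dimensional latent state, mimicking low-dimensional patient trajectories.
n, T, k = 300, 20, 2
Z = rng.normal(size=(n, k))
basis = rng.normal(size=(k, T))
X = Z @ basis + 0.01 * rng.normal(size=(n, T))

# Linear auto-encoder: encode each series to k dimensions, decode it back,
# trained by gradient descent on the mean squared reconstruction error.
W_enc = 0.1 * rng.normal(size=(T, k))
W_dec = 0.1 * rng.normal(size=(k, T))
lr = 0.01
for _ in range(500):
    code = X @ W_enc            # n x k patient embeddings
    X_hat = code @ W_dec        # reconstructed trajectories
    err = X_hat - X
    grad_dec = code.T @ err / n
    grad_enc = X.T @ (err @ W_dec.T) / n
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
print(f"reconstruction MSE: {mse:.4f}")
```

The learned `code` plays the role of the patient-state summary; downstream prediction of deteriorations would be built on top of such an embedding.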
konsta.mish [at] gmail.com
PhD from KAUST, supervised by Peter Richtarik
Optimization for machine learning.
I design new optimization algorithms for machine learning and study their convergence. I am particularly interested in stochastic methods, adaptivity, distributed training, and federated learning.
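A minimal example of the kind of stochastic method referred to above: plain minibatch SGD on a least-squares problem. The problem instance, batch size, and step size are illustrative choices, not results from this research.

```python
import numpy as np

rng = np.random.default_rng(0)

# Least-squares problem: minimize f(w) = (1/2n) ||X w - y||^2.
n, d = 1000, 10
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star + 0.01 * rng.normal(size=n)

# Plain minibatch SGD with a constant step size: each step uses the
# gradient computed on a random subsample of the data.
w = np.zeros(d)
lr, batch = 0.1, 32
for step in range(2000):
    idx = rng.choice(n, size=batch, replace=False)
    grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch
    w -= lr * grad

dist = float(np.linalg.norm(w - w_star))
print("distance to solution:", dist)
```

Adaptive, distributed, and federated variants of this basic template are precisely what convergence analyses in this area study.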
Université Paris Dauphine-PSL
tambysatya [at] gmail.com
PhD, Paris Dauphine-PSL
Discrete optimization using machine learning.
Discrete optimization is a very efficient approach to solving decision problems, but it is extremely costly in general. We are trying to use machine learning techniques as heuristics to guide the exploration of the search space.
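As a hedged illustration of the idea (not this thesis's actual method), here is a depth-first branch and bound for a toy 0/1 knapsack in which a heuristic score fixes the exploration order; the value/weight ratio stands in for a learned model, which is exactly where a learned guide would plug in.

```python
# 0/1 knapsack solved by depth-first branch and bound with pruning.
values  = [60, 100, 120, 30, 90]
weights = [10, 20, 30, 5, 15]
capacity = 40

# Heuristic exploration order: most promising items first. A learned
# scoring model would replace the value/weight ratio used here.
order = sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True)

best = 0
def search(pos, cap, val):
    global best
    if val > best:
        best = val
    if pos == len(order):
        return
    # Optimistic bound: even taking every remaining item cannot beat best.
    if val + sum(values[order[j]] for j in range(pos, len(order))) <= best:
        return  # prune this branch
    i = order[pos]
    if weights[i] <= cap:
        search(pos + 1, cap - weights[i], val + values[i])  # take item i
    search(pos + 1, cap, val)                               # skip item i

search(0, capacity, 0)
print("best value:", best)
```

A good exploration order makes the incumbent `best` improve early, so the bound prunes more of the tree; that is the leverage a learned heuristic aims for.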
Université Paris Dauphine-PSL
quentin.cohen-solal [at] dauphine.psl.eu
PhD at the University of Caen
Reinforcement learning in games.
This postdoc focuses on the study and improvement of learning and planning algorithms in games.
mathieu.even [at] inria.fr
On Federated and Distributed Learning Problems.
We study federated and distributed learning problems, with a strong emphasis on theoretical guarantees. A (possibly large) number of agents aim at making predictions (supervised or unsupervised learning). To what extent can they benefit from collaborating with each other, depending on the learning problem and the communication constraints?
amaury.triboulin [at] inria.fr
Master’s Degree at École normale supérieure
Symmetries in Machine Learning for Structured Data.
In this thesis, we consider high-dimensional problems with an additional structure that comes from the geometry of the input signal, and we explore ways to incorporate this geometric structure into the learning algorithms. We have already started to investigate new architectures based on equivariant layers, tested them on combinatorial optimization problems, and showed that it is possible to learn representations of hard (typically NP-hard) problems. We believe this could lead to new, less resource-dependent algorithms for learning efficient heuristics on practical instances.
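A minimal example of what "equivariant layer" means here (a DeepSets-style permutation-equivariant linear map, not the thesis's actual architecture): permuting the input set permutes the output the same way, so the layer respects the symmetry of set-structured data.

```python
import numpy as np

rng = np.random.default_rng(0)

# A permutation-equivariant linear layer on a set of n feature vectors:
# each element gets the same per-element transform A, plus a term B applied
# to the mean over the set (the set-level interaction).
n, d_in, d_out = 6, 4, 3
A = rng.normal(size=(d_in, d_out))
B = rng.normal(size=(d_in, d_out))

def equivariant_layer(X):
    # The mean is invariant to reordering, the first term commutes with it.
    return X @ A + X.mean(axis=0, keepdims=True) @ B

X = rng.normal(size=(n, d_in))
perm = rng.permutation(n)

# Equivariance check: f(P X) == P f(X) for a permutation P.
out1 = equivariant_layer(X[perm])
out2 = equivariant_layer(X)[perm]
print("equivariant:", np.allclose(out1, out2))
```

Stacking such layers (with nonlinearities) yields networks whose predictions cannot depend on an arbitrary ordering of the input, which is the structural prior exploited for combinatorial problems.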
umut.simsekli [at] inria.fr
I am a Research Faculty (Chargé de Recherche) at INRIA – SIERRA team and École Normale Supérieure, Computer Science Department. I received my PhD from Boğaziçi University, Computer Engineering Department in 2015. During 2016-2020, I was an associate professor at Télécom Paris, Institut Polytechnique de Paris and I spent one year as a visiting faculty at the University of Oxford, Department of Statistics during 2019-2020.
Topics of interest
Theory of Deep Learning, Markov Chain Monte Carlo.
Project in Prairie
My main research area is machine learning, including theory, algorithms, and applications. My ultimate goal has been to develop theory that closely follows practice and leads to methods that have impact on real problems. In particular, I have been interested in (i) analyzing deep learning methods from the lens of heavy-tailed dynamical systems theory, and (ii) implicit generative modeling algorithms by using tools from computational optimal transport.
Machine learning is a fascinating field, which continuously generates exciting and quite nontrivial theoretical, practical, and even societal/ethical questions. Attacking the questions from all these aspects simultaneously, the partial solutions offered by machine learning researchers have only posed additional questions and revealed many interesting and sometimes surprising “phenomena”. With such puzzling observations, I believe the machine learning of today should be treated as a natural science, rather than an engineering science, with many mysteries to be discovered and with many potential outcomes that might outreach its apparent scope.
clement.royer [at] dauphine.psl.eu
Clément Royer is an associate professor of computer science at Université Paris Dauphine-PSL and a researcher in the MILES team at LAMSADE. From 2016 to 2019, he was a postdoctoral research associate at the Wisconsin Institute of Discovery, University of Wisconsin-Madison, USA. He received his Ph.D. in applied mathematics from the University of Toulouse, France, in 2016. Clément is a recipient of the COAP Best Paper Prize for 2019.
Topics of interest
Numerical optimization, Optimization for machine learning, Randomized algorithms.
Project in Prairie
As the amount of data available and the complexity of the models keep increasing, a number of issues arise in deploying optimization techniques for artificial intelligence at scale. Such challenges have long been integrated in high-performance computing, where the combination of optimization with other fields from numerical linear algebra to differential equations has led to powerful algorithms. This project aims at adopting a similar approach with optimization methods for data science.
My research aims at developing optimization methods for artificial intelligence that leverage existing methodology and advances from scientific computing along two axes. On one hand, we motivate the use of standard algorithmic frameworks for scientific computing in modern learning tasks by proposing practical schemes with complexity guarantees. Our research will aim at analyzing the complexity of classical second-order methods used in scientific computing so as to design frameworks with theoretical grounds and practical appeal for artificial intelligence. On the other hand, we develop derivative-free algorithms for automated parameter tuning of complex data science models. Our setting will be that of expensive, black-box systems for which a number of parameters require calibration.
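As an illustrative sketch of the derivative-free setting described above (not the project's actual method), a simple coordinate/direct search polls each parameter direction and shrinks its step when no poll improves the black-box objective; the objective function here is a stand-in for an expensive model evaluation.

```python
import numpy as np

def black_box(theta):
    # Stand-in for an expensive black-box system (e.g. a validation loss);
    # only function values are available, no derivatives.
    return (theta[0] - 1.0) ** 2 + 10 * (theta[1] + 0.5) ** 2

def coordinate_search(f, theta0, step=1.0, tol=1e-6, max_evals=10_000):
    """Poll +/- step along each coordinate; halve the step when stuck."""
    theta = np.array(theta0, dtype=float)
    f_best = f(theta)
    evals = 1
    while step > tol and evals < max_evals:
        improved = False
        for i in range(len(theta)):
            for s in (+step, -step):
                cand = theta.copy()
                cand[i] += s
                f_cand = f(cand)
                evals += 1
                if f_cand < f_best:
                    theta, f_best = cand, f_cand
                    improved = True
        if not improved:
            step /= 2  # refine the mesh, as in direct-search methods
    return theta, f_best

theta, f_best = coordinate_search(black_box, [0.0, 0.0])
print("tuned parameters:", theta, "objective:", f_best)
```

Mesh-refining direct searches of this kind come with complexity guarantees and need nothing but function evaluations, which is why they suit expensive, black-box calibration tasks.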