theophile.wallez [at] inria.fr
Master of Computer Science, ENS Ulm
A verification framework for privacy-preserving machine learning
Machine learning is known to be hungry for data, which is often private. Recent advances in privacy-preserving machine learning use new cryptographic techniques to avoid exposing private data. However, such cryptographic implementations are error-prone, which can result in information leakage. Therefore, I use the F* program verifier to implement modern multiparty computation protocols, such as SPDZ2k.
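As a minimal illustration of the setting such protocols operate in, here is a sketch of additive secret sharing over Z_{2^k}, the ring SPDZ2k-style protocols work in. This is illustrative Python only, not the actual protocol (which also includes MACs for active security) and not its verified F* implementation.

```python
import secrets

# Additive secret sharing over Z_{2^k} (illustrative sketch).
K = 64
MOD = 1 << K

def share(x, n_parties):
    """Split x into n additive shares modulo 2^k; any n-1 shares reveal nothing."""
    shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    """Recombine all shares to recover the secret."""
    return sum(shares) % MOD

# Parties can add two secrets share-wise, without ever seeing the secrets.
a_shares = share(42, 3)
b_shares = share(100, 3)
sum_shares = [(a + b) % MOD for a, b in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 142
```

Addition is local in this scheme; multiplication is where protocols such as SPDZ2k require interaction and preprocessed correlated randomness.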
Master, Sorbonne University
NLP for low-resource, non-standardised language varieties, especially North-African dialectal Arabic written in Latin script.
DI FOLCO Cécile
cecile.difolco [at] icm-institute.org
Engineering degree (AgroParisTech)
Master of Data Science («Informatique : Systèmes Intelligents», Université Paris-Dauphine)
Master of Cognitive Sciences (ENS, Université de Paris, EHESS)
Modelling neurodegenerative diseases.
I study the modelling of the progression of neurodegenerative diseases using imaging and clinical data. In particular, I investigate the influence of various cofactors, including genetics, on the progression of Parkinson’s disease.
quentin.burthier [at] inria.fr
Diplôme d’Ingénieur (ENSTA Paris)
Master 2 MVA (ENS Paris-Saclay)
Contextual Machine Translation of User-Generated Content.
Machine translation has seen huge progress in recent years, mainly thanks to deep learning. However, machine translation systems dramatically fail to translate noisy user-generated texts (containing grammar and syntax errors, emojis, jargon…) and to handle contextual information. We aim to investigate these failure cases and to build systems that are both robust to noise and able to leverage various contextual sources to translate ambiguous sentences correctly.
florian.yger [at] dauphine.fr
Associate professor at Université Paris-Dauphine since 2015. JSPS fellow in the laboratory of Prof. Sugiyama (from 2014 to 2015) and visiting researcher at RIKEN AIP (since summer 2017).
Topics of interest
Trustworthy machine learning, causal inference, interpretable AI
Project in Prairie
Florian Yger will address the questions of trust, explainability and interpretability in machine learning models (including deep learning), with a focus on robustness to adversarial examples and counterfactual reasoning on data. This project has natural and practical applications in the medical field.
In the last decade, deep learning has enabled breakthroughs in several domains (e.g. computer vision, machine translation, games). Yet these hard-to-interpret algorithms are fed with huge amounts of (sometimes sensitive) data and can suffer from malicious attacks: attacks on the privacy of the data, and attacks on robustness, in which adversarial examples are crafted to fool the algorithm. This is a critical issue (especially in medical applications), and we feel that an effort toward a deeper theoretical analysis is needed.
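As a concrete illustration of such an attack, the Fast Gradient Sign Method (FGSM) perturbs an input in the direction of the sign of the loss gradient. The sketch below applies it to a toy logistic-regression classifier; all weights, inputs and names here are illustrative, not drawn from the project itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """FGSM on a linear classifier: nudge x by eps in the direction
    that increases the binary cross-entropy loss.

    x : input vector, w/b : model weights and bias,
    y : true label in {0, 1}, eps : L-infinity perturbation budget.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Toy demonstration: a correctly classified point gets flipped.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, -0.5])         # w @ x + b = 1.5  -> predicted class 1
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=1.0)
# w @ x_adv + b = -1.5             -> prediction flips to class 0
```

The same one-step gradient attack carries over to deep networks (where the input gradient is obtained by backpropagation), which is why robustness to such perturbations is a core concern of the project.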