WALLEZ Théophile

Research Engineer

INRIA

theophile.wallez [at] inria.fr

Short bio

Master of Computer Science, ENS Ulm

Research project

A verification framework for privacy-preserving machine learning

Short abstract

Machine learning is notoriously hungry for data, which is often private. Recent advances in privacy-preserving machine learning use cryptographic techniques to avoid exposing private data. However, such cryptographic implementations are error-prone, and implementation bugs can leak the very data they are meant to protect. I therefore use the F* program verifier to implement and verify modern multiparty computation protocols such as SPDZ2k.
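
To give a concrete flavour of the multiparty-computation idea behind protocols such as SPDZ2k (a minimal Python sketch, not the verified F* implementation): secrets are additively shared modulo 2^k, so parties can compute on shares without any single party ever seeing the underlying value.

import secrets

K = 64
MOD = 1 << K  # SPDZ2k-style arithmetic is performed modulo 2^k

def share(x, n_parties=3):
    """Split secret x into n additive shares that sum to x mod 2^k."""
    shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    """Recombine shares; any strict subset reveals nothing about x."""
    return sum(shares) % MOD

def add_shared(a_shares, b_shares):
    """Each party adds its local shares to obtain shares of a + b,
    with no communication needed for addition."""
    return [(a + b) % MOD for a, b in zip(a_shares, b_shares)]

a, b = 20, 22
assert reconstruct(add_shared(share(a), share(b))) == (a + b) % MOD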

RIABI Arij

Research Engineer

INRIA

arij.riabi [at] inria.fr

Short bio

Master's degree, Sorbonne University

Research project

NLP for low-resource, non-standardised language varieties, especially North-African dialectal Arabic written in Latin script.
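
To illustrate one of the challenges involved (a hypothetical Python sketch, not the project's actual pipeline): Arabic written in Latin script, often called Arabizi, commonly encodes consonants that have no Latin equivalent with digits, which standard NLP tools do not expect.

# Hypothetical sketch: mapping common Arabizi digit conventions to Arabic
# letters. Conventions vary by region and real dialectal text is far
# noisier; this is only illustrative.
ARABIZI_TO_ARABIC = {
    "3": "ع",  # ʿayn
    "7": "ح",  # ḥa
    "9": "ق",  # qaf (common in Maghrebi usage)
    "2": "ء",  # hamza (glottal stop)
}

def normalise(token):
    """Replace digit-encoded consonants, one small first step toward
    normalisation; the remaining Latin letters would still need
    transliteration."""
    return "".join(ARABIZI_TO_ARABIC.get(ch, ch) for ch in token)

print(normalise("3lach"))  # "3lach" is roughly "why" in Maghrebi Arabic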

DI FOLCO Cécile

Research Engineer

ICM Institute

cecile.difolco [at] icm-institute.org

Short bio

Engineering degree (AgroParisTech)

Master of Data Science (« Informatique : Systèmes Intelligents », Université Paris-Dauphine)

Master of Cognitive Sciences (ENS, Université de Paris, EHESS)

Research project

Modelling neurodegenerative diseases.

Short abstract

I model the progression of neurodegenerative diseases using imaging and clinical data. In particular, I investigate the influence of various cofactors, including genetics, on the progression of Parkinson's disease.
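
As an illustration of the kind of question this involves (a toy Python simulation with invented parameters, not the actual ICM model or data): a longitudinal progression model in which a binary genetic cofactor shifts each patient's rate of decline.

import numpy as np

rng = np.random.default_rng(0)

def simulate_progression(n_patients=100, n_visits=5):
    """Toy longitudinal model: score_ij = b0_i + rate_i * t_j + noise,
    where a hypothetical binary risk allele increases the rate."""
    genotype = rng.integers(0, 2, n_patients)      # hypothetical risk allele
    baseline = rng.normal(10.0, 2.0, n_patients)   # patient-specific intercept
    rate = 1.0 + 0.5 * genotype                    # faster progression if carrier
    t = np.arange(n_visits, dtype=float)           # visit times in years
    noise = rng.normal(0.0, 0.5, (n_patients, n_visits))
    scores = baseline[:, None] + rate[:, None] * t + noise
    return genotype, scores

genotype, scores = simulate_progression()
# Compare mean trajectories of carriers vs. non-carriers
for g in (0, 1):
    print(f"genotype={g}: mean score per visit =",
          scores[genotype == g].mean(axis=0).round(2))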

BURTHIER Quentin

Engineer

INRIA

quentin.burthier [at] inria.fr

Short bio

Diplôme d’Ingénieur (ENSTA Paris)

Master 2 MVA (ENS Paris-Saclay)

Thesis title

Contextual Machine Translation of User-Generated Content.

Short abstract

Machine translation has made huge progress in recent years, mainly thanks to deep learning. However, machine translation systems still fail dramatically on noisy user-generated texts (containing grammatical and syntactic errors, emojis, jargon, etc.) and struggle to handle contextual information. We aim to characterise these failure cases and to build systems that are both robust to noise and able to leverage various contextual sources to translate ambiguous sentences correctly.
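
One common technique for noise robustness, shown here as a hedged Python illustration rather than the approach taken in this thesis, is to inject synthetic noise into clean training sentences so that the model sees UGC-like typos during training.

import random

random.seed(0)

def add_char_noise(sentence, p=0.1):
    """Randomly swap adjacent characters or drop one, mimicking the
    typos found in user-generated content. Illustrative sketch only."""
    chars = list(sentence)
    i = 0
    while i < len(chars) - 1:
        r = random.random()
        if r < p / 2:       # swap two adjacent characters
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
            i += 2
        elif r < p:         # drop a character
            del chars[i]
        else:
            i += 1
    return "".join(chars)

print(add_char_noise("This is a perfectly clean sentence."))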

YGER Florian

Machine learning

florian.yger [at] dauphine.fr

Short bio

Associate professor at Université Paris-Dauphine since 2015. JSPS fellow in the laboratory of Prof. Sugiyama (2014 to 2015) and visiting researcher at RIKEN AIP since summer 2017.

Topics of interest

Trustworthy machine learning, causal inference, interpretable AI

Project in Prairie

Florian Yger will address questions of trust, explainability and interpretability in machine learning models (including deep learning), with a focus on robustness to adversarial examples and counterfactual reasoning on data. This project has natural and practical applications in the medical field.

Quote

In the last decade, deep learning has enabled breakthroughs in several domains (e.g. computer vision, machine translation, games). Yet these hard-to-interpret algorithms are fed with huge amounts of sometimes sensitive data and can suffer from malicious attacks: attacks on the privacy of the data, and robustness attacks in which adversarial examples are generated to fool the algorithm. This is a critical issue, especially in medical applications, and we feel that an effort toward a deeper theoretical analysis is needed.
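
To make the adversarial-examples threat concrete (a self-contained Python sketch on a toy logistic model with made-up weights, not any real system), here is the classic fast gradient sign method, which perturbs an input in the direction that most increases the loss.

import numpy as np

rng = np.random.default_rng(0)

# Toy logistic-regression "model" with fixed random weights,
# standing in for a trained network.
w = rng.normal(size=10)
b = 0.0

def predict(x):
    """P(label = 1) under the toy logistic model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x, y, eps=0.25):
    """Fast Gradient Sign Method (Goodfellow et al., 2015):
    move x by eps in the sign of the loss gradient. For logistic
    regression, d(cross-entropy)/dx = (p - y) * w."""
    grad_x = (predict(x) - y) * w
    return x + eps * np.sign(grad_x)

x = rng.normal(size=10)
y = 1.0
x_adv = fgsm(x, y)
print(f"clean score: {predict(x):.3f}  adversarial score: {predict(x_adv):.3f}")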