QUENNELLE Sophie

PhD student

Hôpital Necker-Enfants Malades

sophie.quennelle [at] protonmail.com

Short bio

  • Master 2 in Biomedical Informatics, Sorbonne Université, Paris
  • MD in cardiology – Université Paris Cité

Thesis title

Deep representation of the patient’s electronic health record for clinical event prediction and patient similarity.

Short abstract

Sophie Quennelle is a pediatric cardiologist at Necker-Enfants Malades in Paris, interested in health data extraction and reuse for clinical research. Her PhD project started in October 2020, supervised by Prof. Anita Burgun and co-supervised by Dr. Antoine Neuraz. Its objective is to propose a deep learning model that provides a reliable representation of the patient's electronic health record.

GILMARTIN Emer

Postdoctoral researcher

Inria

emer.gilmartin [at] inria.fr

Short bio

  • Ph.D., Trinity College Dublin, Ireland
  • M.Phil., Trinity College Dublin, Ireland
  • B.E (Mech), NUIG, Ireland

Short abstract of the research project

We are working with groups in Korea to understand and model the effects of interlocutor personality in dialogue. We are building a new model of ‘interpersonality’: how the personality-related behaviours of each participant in a conversation affect the conversation as a whole.

CASTAGNÉ Roman

PhD student

Inria

roman.castagne [at] gmail.com

Short bio

  • MVA Master’s Degree from ENS Paris-Saclay
  • Engineering Master’s Degree from Ecole des Ponts

Thesis title

Life of a Language Model

ABADJI Julien

Engineer

Inria

julien.abadji [at] inria.fr

Short bio

Master’s degree from CY University (formerly Université de Cergy-Pontoise)

Research project

OSCAR Project/Corpus

Short abstract

OSCAR is an open-source project that aims to provide web-based multilingual resources and datasets for Machine Learning (ML) and Artificial Intelligence (AI) applications.

NISHIMWE Lydia

Inria

lydia.nishimwe [at] inria.fr

Short bio

  • Bachelor of Science in Mathematics and Computer Science, Université Grenoble Alpes
  • Master of Engineering in Mathematics and Computer Science, École Centrale de Nantes

Thesis topic

Robust Neural Machine Translation.

Short abstract

Neural machine translation models struggle to translate texts that differ from the “standard” data commonly used to train them. In particular, social media texts pose many challenges because they tend to be “noisy”: non-standard use of spelling, grammar and vocabulary; typographical errors; use of emojis, hashtags and at-mentions; etc. I aim to develop new methods to better translate these texts.

LE PRIOL Emma

Université Paris Cité and Kap Code

emmalepriol [at] gmail.com

Short bio

Master’s degrees in mathematics (Université Paris-Dauphine) and social sciences (Sciences Po Paris)

Thesis topic

Using NLP to leverage social media data in the study of rare diseases.

Short abstract

My PhD thesis explores NLP techniques for studying the online content produced by patients with rare diseases or their caregivers. The first goal is to better understand the natural histories of the studied diseases and to compare spontaneously reported symptoms with symptoms collected during medical interviews. The second goal is to study how patients’ associations become involved in public policy governance, in particular by collectively acquiring a vast body of knowledge.

GODEY Nathan

PhD student

Inria

nathan.godey [at] inria.fr

Short bio

Master of Engineering, Ecole des Ponts

Thesis topic

Cheap and expressive neural contextual representations for textual data.

Short abstract

Neural language models are pre-trained using self-supervised learning to produce contextual representations of text data such as words or sentences. These representations are shaped by the pre-training procedure: its data, its task, and its optimization scheme, among others. I aim to identify ways to improve the quality of text representations by leveraging new pre-training approaches, in order to reduce data and/or compute requirements without loss of quality.

DO Salomé

PhD student

École normale supérieure - PSL

salome.do [at] ens.psl.eu

Short bio

MSc / Engineering degree at ENSAE IP Paris

Thesis topic

Computational Content Analysis Methods for News Frames Prevalence Estimation in the Political Press.

Short abstract

This dissertation aims to provide Computational Content Analysis (CCA) methods for the analysis of news framing in the political press. First, it aims to create a French corpus of political press articles and to provide human annotations for two news frame identification tasks, derived from the literature on strategic news framing and “horse race” journalism. Second, it aims to explore the conditions (frame complexity, data quantity and data quality) under which Supervised Machine Learning (SML) methods can “augment” social scientists, i.e. train a model to generalize social scientists’ content analysis (CA) codebooks (and the resulting text annotations) so that billions of articles can be analyzed instead of a few hundred. Third, the dissertation aims to evaluate the potential benefits of CCA over CA when estimating the prevalence of news frames in a corpus. What justifies using CCA over CA, and is it always justified? I will try to define the conditions on SML model performance under which news frame prevalence estimates are more accurate with CCA than with CA.

POURNAKI Armin

PhD Student

ENS & PSL

pournaki [at] mis.mpg.de

Short bio

Master’s degree in Theoretical Physics, 2021, Technical University Berlin

Thesis title

Analysing discourse and semantics through geometric representations.

Short abstract

I explore geometric approaches to language and discourse analysis. Currently, I am working on combining methods from network science and natural language processing to gain insight into the mechanisms behind the spread of information and knowledge related to climate change.

BAWDEN Rachel

Inria

rachel.bawden [at] inria.fr

Short bio

Researcher (Chargée de recherche) at Inria in the ALMAnaCH project-team since 2020. She previously obtained a PhD from Université Paris-Sud (awarded the ATALA thesis prize) and spent two years as a postdoc in the Machine Translation group at the University of Edinburgh.

Topics of interest

Natural language processing, multilinguality, machine translation

Project in Prairie

Rachel Bawden will focus on improving Machine Translation in the face of language variation (texts from different domains, user-generated texts and historical language). Alongside the development of models, she will also explore the interpretability of models in a bid to make them more robust to variation. Finally, she will experiment with the integration of other input modalities (e.g. image and video data), to help tackle ambiguity and scenarios for which the input signal is impoverished or incomplete.

Quote

Huge progress has been made in Machine Translation in recent years. However, translating domain-specific texts (e.g. biomedical and financial), texts displaying a high degree of language variation (e.g. social media texts containing spelling errors, acronyms and marks of expressiveness) and other non-standard varieties of language (including dialects and historical languages) remains a challenge. Developing models that (i) are robust to variation, (ii) can handle the low-resource settings these scenarios often present and (iii) can incorporate relevant external context is therefore fundamental to progress in Machine Translation.

Team

NISHIMWE Lydia

PhD student


LASRI Karim

PhD Student

École normale supérieure - PSL

karim.lasri [at] ens.fr

Short bio

  • Engineer’s degree from CentraleSupélec (formerly École Centrale Paris)
  • Master’s degree in Cognitive Science from the École normale supérieure

Thesis title

Linguistic generalization in transformer-based neural language models.

Short abstract

Transformer-based neural architectures hold great promise, as they seem able to address a wide range of linguistic tasks after learning a language model. However, the level of abstraction they reach after training remains opaque. My main research focus is better understanding how neural language models generalize. What linguistic properties do these architectures acquire during learning? How is linguistic information encoded in their intermediate representation spaces?

SAGOT Benoît

benoit.sagot [at] inria.fr

Short bio

Research Director at Inria, head of the ALPAGE (2014-2016) and ALMAnaCH (2017-) teams. Co-founder of the Verbatim Analysis (2009-) and opensquare (2016-) Inria start-ups.

Topics of interest

Computational linguistics, Natural Language Processing (NLP), NLP applications.

Project in Prairie

Benoît Sagot will focus on improving and better understanding neural approaches to NLP and integrating linguistic and extra-linguistic contextual information. He will study how non-neural approaches and language resources can contribute to improving neural NLP systems in low-resource and non-edited scenarios. Applications, both academic and industrial, will include computational linguistics and sociolinguistics, opinion mining in survey results, NLP for financial and historical documents, and text simplification to help people with disabilities.

Quote

Most current research in NLP focuses on neural architectures that rely on large volumes of data, in the form of both raw text and costly annotated corpora. The increasing amount of data necessary to train such models is not available for all languages and can require massive computational resources. Moreover, these approaches are highly sensitive to language variation, illustrated for instance by domain-specific texts, historical documents and non-edited content as found on social media. To address these issues and allow for a wider deployment of NLP technologies, this bottleneck must be overcome. This will require new models that better exploit the complex structure of language and the context in which it is used.

Team

ABADJI Julien
Engineer

GODEY Nathan
PhD student

CASTAGNÉ Roman
PhD student

POIBEAU Thierry

Natural Language Processing, Digital Humanities

thierry.poibeau [at] ens.fr

Short bio

CNRS Research Director, head of the CNRS Lattice research unit (2012-2018) and adjunct head since 2019. Affiliated lecturer at the Language Technology Laboratory, University of Cambridge, since 2009. Rutherford fellowship, Turing Institute, London, 2018-2019. Teaches NLP in the PSL Master in Digital Humanities.

Topics of interest

Computational linguistics, Low resource languages, Corpora, Distant reading, AI and creativity

Project in Prairie

Thierry Poibeau’s work focuses on Natural Language Processing. He is especially interested in developing techniques for low resource languages that have largely been left out of the machine learning revolution. He is also interested in applying AI techniques to the study of literature and social sciences, shedding new light on the notions of culture and creativity.

Quote

Natural Language Processing (NLP) has made considerable progress over the last few years, mainly due to impressive advances in machine learning. We now have efficient and accurate tools for 20+ languages, but the vast majority of the world’s languages lack the resources needed for state-of-the-art NLP. This is a major challenge for our field, since preserving linguistic and cultural diversity is as important as preserving biodiversity. Technology is not the only solution, but it can help by leveraging resources, bridging the gap between languages, and enhancing our understanding of culture and society.

Team

DO Salomé

PhD student


POURNAKI Armin

PhD student


ELAMRANI Aïda

Postdoc