Natural Language Inference and Inference in Natural Language

Humans constantly draw inferences from available information, as an essential part of decision-making and in a manner that informs and sometimes determines action.  This ability has been studied with modern methodologies in the social sciences (linguistics, philosophy, cognitive psychology) for the past 70 or so years.  More recently, research in AI has made it a priority to train natural language processing models to draw human-like inferences over naturalistic text, for two reasons.  First, inference-making is an essential aspect of human linguistic and psychological competence, and so provides a crucial means of testing what our models have learned.  Second, the ability to draw human-like inferences is necessary for effective human-machine interaction.  However, current AI work on the inferences drawn by language models makes insufficient contact with the empirical discoveries and theoretical advances of the social sciences disciplines that specialize in describing and explaining the relevant human faculties.  This project proposes to advance the convergence between natural language inference (NLI) in AI and related work in linguistics, philosophy, and cognitive psychology, chiefly by investigating two points of contact: pragmatics and the notion of alternative possibilities.

Pragmatics: Many instances of what looks like entailment in natural language are in fact the result of a listener reasoning about the communicative intentions of the speaker (Grice, 1975).  For example, “Some of the students passed” is typically understood to convey that not all of them did, an implicature that, unlike an entailment, can be cancelled without contradiction (“Some of the students passed; in fact, all of them did”).  Are NLI models sensitive to the particular properties of these kinds of inferences, as distinct from “regular” entailments, and as studied by linguists, philosophers, and psychologists?  Can NLI models be made sensitive to these properties through training guided by insights from the relevant social sciences disciplines?
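
One natural way to probe the first question is to compare a genuine entailment with a pragmatic implicature using an off-the-shelf NLI classifier.  The sketch below is illustrative rather than the project's actual protocol: it assumes the Hugging Face transformers library, the public roberta-large-mnli checkpoint, and hand-picked example sentences.

    # A minimal probing sketch, not the project's protocol.  Assumes the
    # Hugging Face transformers library and the public roberta-large-mnli
    # checkpoint; the sentence pairs are illustrative.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    MODEL = "roberta-large-mnli"
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL)

    def nli_probs(premise: str, hypothesis: str) -> dict:
        """Return the model's probability for each NLI label."""
        inputs = tokenizer(premise, hypothesis, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits
        probs = logits.softmax(dim=-1).squeeze()
        return {model.config.id2label[i]: float(p) for i, p in enumerate(probs)}

    # A genuine entailment ("all" entails "some")...
    print(nli_probs("All of the students passed.", "Some of the students passed."))
    # ...versus a scalar implicature ("some" suggests, but does not entail,
    # "not all"): does the model label the second pair ENTAILMENT too?
    print(nli_probs("Some of the students passed.", "Not all of the students passed."))

If the model assigned ENTAILMENT to both pairs, it would be treating a cancellable implicature like a logical entailment, exactly the kind of (in)sensitivity the project aims to measure.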

Alternative possibilities: Recent work at the intersection of the social sciences disciplines of interest has established that the consideration of alternative possibilities, such as those introduced by a disjunctive sentence, underlies many instances of fallacious human reasoning (Koralus & Mascarenhas, 2013, 2018; Sablé-Meyer & Mascarenhas, 2021).  We will investigate whether, and to what extent, NLI models are sensitive to these phenomena, with a special focus on models whose central training task is to predict words in context (e.g., BERT).
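
A similar probe can be run on masked language models directly, by comparing the probability a model assigns to a continuation under a disjunctive versus a conjunctive premise.  The sketch below is again only illustrative: it assumes the Hugging Face transformers library and the public bert-base-uncased checkpoint, and the stimuli are stand-ins rather than the project's experimental items.

    # A minimal sketch, not the project's materials: probes whether a masked
    # LM treats a mere disjunct as if it were settled fact.  Assumes Hugging
    # Face transformers and the public bert-base-uncased checkpoint.
    import torch
    from transformers import AutoModelForMaskedLM, AutoTokenizer

    MODEL = "bert-base-uncased"
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForMaskedLM.from_pretrained(MODEL)

    def masked_word_prob(text: str, target: str) -> float:
        """Probability assigned to `target` at the [MASK] position."""
        inputs = tokenizer(text, return_tensors="pt")
        mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
        with torch.no_grad():
            logits = model(**inputs).logits
        probs = logits[0, mask_pos].softmax(dim=-1)
        return float(probs[tokenizer.convert_tokens_to_ids(target)])

    # Only the conjunctive premise licenses the conclusion that Mary brings
    # wine; the disjunction merely presents it as one alternative.
    disjunction = "Mary will bring wine or beer. She will definitely bring [MASK]."
    conjunction = "Mary will bring wine and beer. She will definitely bring [MASK]."
    for context in (disjunction, conjunction):
        print(context, "->", masked_word_prob(context, "wine"))

A model that scored "wine" comparably in the two contexts would be collapsing a mere alternative into a settled fact, mirroring the fallacious inference patterns documented in the human reasoning literature cited above.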

Collaborators in the project

  • Michael GOODALE — Research Assistant on the project
  • Justine CASSELL — PRAIRIE Chair