Colloquium PRAIRIE

UPCOMING SEMINARS

4th Prairie seminar – 16 September 2020, at 11h CET (webinar)

Speaker: Éric Moulines, École Polytechnique

Title: “MCMC, Variational Inference, Invertible Flows… Bridging the gap?”

==== Abstract ====
Variational Autoencoders (VAEs), generative models combining variational inference and autoencoding, have found widespread application in learning latent representations of high-dimensional observations. However, most VAEs rely on simple mean-field variational distributions and therefore suffer from limited expressiveness, which results in a poor approximation of the conditional latent distribution and, in particular, in mode dropping. In this work, we propose Metropolized VAE (MetVAE), a VAE approach based on a new class of variational distributions enriched with Markov Chain Monte Carlo. We develop a specific instance of MetVAE with Hamiltonian Monte Carlo and demonstrate clear improvements in the latent distribution approximations at the cost of a moderate increase in computation. We consider an application to probabilistic collaborative filtering models, and numerical experiments on classical benchmarks support the performance of MetVAE.
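The abstract gives no implementation details, so the following is only a rough, self-contained sketch of the general ingredient it describes: taking a latent sample proposed by a mean-field encoder and refining it with a few Metropolis-adjusted Hamiltonian Monte Carlo steps targeting p(z | x). Every name here (log_joint, decode, hmc_refine, the Gaussian observation model, the step sizes) is an illustrative assumption, not the MetVAE implementation.

```python
# Illustrative sketch only: a few HMC steps refining a mean-field VAE proposal.
# All functions and parameters are hypothetical stand-ins, not MetVAE itself.
import numpy as np

def log_joint(z, x, decode):
    """Unnormalised log p(x, z): standard-normal prior on z and a Gaussian
    observation model around the decoder output (assumed forms)."""
    recon = decode(z)
    return -0.5 * np.sum(z ** 2) - 0.5 * np.sum((x - recon) ** 2)

def grad_log_joint(z, x, decode, eps=1e-5):
    """Finite-difference gradient, to keep the sketch dependency-free."""
    g = np.zeros_like(z)
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        g[i] = (log_joint(z + dz, x, decode) - log_joint(z - dz, x, decode)) / (2 * eps)
    return g

def hmc_refine(z0, x, decode, n_steps=5, n_leapfrog=10, step=0.05, rng=None):
    """Start from the encoder's sample z0 and run a few Metropolis-adjusted
    Hamiltonian Monte Carlo steps targeting p(z | x), proportional to p(x, z)."""
    if rng is None:
        rng = np.random.default_rng(0)
    z = z0.copy()
    for _ in range(n_steps):
        p = rng.normal(size=z.shape)                              # resample momentum
        z_new, p_new = z.copy(), p.copy()
        p_new += 0.5 * step * grad_log_joint(z_new, x, decode)    # half step
        for _ in range(n_leapfrog - 1):                           # leapfrog integration
            z_new += step * p_new
            p_new += step * grad_log_joint(z_new, x, decode)
        z_new += step * p_new
        p_new += 0.5 * step * grad_log_joint(z_new, x, decode)    # final half step
        # Metropolis accept/reject on the joint (position, momentum) energy.
        log_accept = (log_joint(z_new, x, decode) - 0.5 * np.sum(p_new ** 2)
                      - log_joint(z, x, decode) + 0.5 * np.sum(p ** 2))
        if np.log(rng.uniform()) < log_accept:
            z = z_new
    return z
```

In a VAE setting one would draw z0 from the encoder's Gaussian proposal for a given observation x and pass the decoder network as `decode`; the refined sample then follows the conditional latent distribution more closely than the mean-field proposal alone.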

Video

PAST SEMINARS

3rd Prairie seminar – 9 June 2020, at 14h CET (webinar)

Speaker: Alex Cristia, Laboratoire de Sciences Cognitives et Psycholinguistique, Département d’études cognitives, ENS, EHESS, Centre National de la Recherche Scientifique, PSL Research University, https://sites.google.com/site/acrsta/

Title: « Unsupervised learning of sounds and words: Is it easier from child-directed speech? »

==== Abstract ====
Developments in recent years have sometimes led to systems that achieve super-human performance, even on tasks previously thought to require human cognition. As of today, however, humans remain simply unsurpassable in the domain of native language acquisition. Children routinely become fluent in one or more languages by about 4 years of age, after exposure to possibly as little as 500 hours, and at most about 8,000 hours, of speech. In stark contrast, the best speech recognition and natural language processing systems on the market today require up to 100 times those quantities of input to achieve a level of performance that is substantially lower than that of humans, and often need at least some labeled data. It has been argued that infants’ acquisition is aided by cooperative tutors: child-directed speech may be simplified in ways that boost learning. In this talk, I present results from several studies assessing the learnability of speech sounds and words from child- versus adult-directed speech. I demonstrate that learnability is increased in input to children only when we assume the learner has access to representations that abstract away from the acoustic signal; when presented with acoustic speech features, learnability is lower for child- than for adult-directed speech. These results suggest that present-day machines are unlikely to benefit from infant-directed input unless we improve our acoustic representations of speech.

Video

================================================

2nd Prairie seminar – 6 May 2020 (webinar)

Speaker: Marc Mézard, École normale supérieure – Université PSL

Title: « Statistical physics insights into some machine learning questions »

==== Abstract ====

For more than thirty years, there have been a number of attempts to use concepts and methods from statistical physics to develop a theoretical framework for machine learning, with mixed success. This line of research has recently been revived around the important open questions raised by recent developments in deep learning, notably questions related to the dynamics of learning algorithms and to the structure of the data.

This talk will present some of these recent developments from a global perspective, highlighting the strengths and weaknesses of such approaches.

Presentation

Video

================================================

1st Prairie seminar – 5 February 2020

Speaker: Jean-François Cardoso, CNRS and Institut d’Astrophysique de Paris (http://www2.iap.fr/users/cardoso/)

Title: « Information geometry of Independent Component Analysis »

==== Abstract ====
Independent Component Analysis (ICA) is an exploratory technique which, as its name implies, aims at decomposing a vector of observations into components that are statistically independent (or as independent as possible). It has numerous applications, particularly in neuroscience, for extracting brain sources from their observed mixtures collected on the scalp.

ICA goes well beyond PCA (Principal Component Analysis) because statistical independence is a much stronger property than mere decorrelation. Of course, this program implies that an ICA method must use non-Gaussian statistics in order to express independence (otherwise, independence would reduce to decorrelation).
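As a small side illustration of this contrast (not taken from the talk), the sketch below mixes two non-Gaussian sources and compares PCA with FastICA, assuming NumPy and scikit-learn are available; the source shapes, mixing matrix and correlation check are arbitrary choices made for the demo.

```python
# Illustrative demo: two non-Gaussian sources are linearly mixed;
# PCA only decorrelates the mixtures, FastICA recovers the sources.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sign(np.sin(3 * t)),      # square wave (non-Gaussian)
                rng.laplace(size=t.size)]    # heavy-tailed noise (non-Gaussian)
A = np.array([[1.0, 0.5],
              [0.7, 1.0]])                   # arbitrary mixing matrix
mixtures = sources @ A.T

pca_est = PCA(n_components=2, whiten=True).fit_transform(mixtures)
ica_est = FastICA(n_components=2, random_state=0).fit_transform(mixtures)

# pca_est is merely decorrelated; ica_est matches the sources up to
# permutation, sign and scale, which the correlation table below makes visible.
for name, est in [("PCA", pca_est), ("ICA", ica_est)]:
    corr = np.corrcoef(np.hstack([sources, est]).T)[:2, 2:]
    print(name, np.round(np.abs(corr), 2))
```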

In this (non-technical) seminar, I use a simple construction from Information Geometry (a Pythagorean theorem in distribution space) to elucidate the connections in ICA between the main players: correlation, independence, non-Gaussianity, mutual information and entropy.
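The talk’s own construction is not reproduced here, but one classical identity relating these same quantities, written below as background under the standard definitions of mutual information I, differential entropy H and negentropy J, may help fix ideas; it is not necessarily the Pythagorean decomposition used in the seminar.

```latex
% Background identity only (Comon-style decomposition); C is the covariance of y.
\begin{aligned}
I(y) \;=\; \sum_i H(y_i) - H(y)
     \;=\; J(y) \;-\; \sum_i J(y_i)
     \;+\; \tfrac{1}{2}\log\frac{\prod_i C_{ii}}{\det C},
\qquad
J(y) \;=\; H(y_{\mathrm{gauss}}) - H(y).
\end{aligned}
```

The last term measures correlation alone and vanishes for whitened (decorrelated) outputs, while negentropy J is invariant under invertible linear transforms; under a whitening constraint, minimizing the mutual information of the outputs therefore amounts to maximizing the non-Gaussianity (the marginal negentropies) of the components, which is why decorrelation by itself, as in PCA, cannot identify the independent components.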

Presentation