Master 2 – Mathématiques, Vision & Apprentissage – ENS Paris-Saclay
Ingénieur Civil des Mines de Paris – Mines ParisTech
Exploration of latent space modelling in Variational Autoencoders.
In the medical field, the scarcity of data, and the resulting low patient variability, remains a key issue. In neuroscience, for example, practitioners have to deal with potentially very high-dimensional data combined with very few samples. Generative models such as variational autoencoders (VAEs) may prove particularly well suited to performing dimension reduction on such data, and their generative capacity could be used for data augmentation. Unfortunately, VAEs perform poorly when trained on (very) few samples, and the underlying structure of their latent space remains poorly understood. We believe that further investigating the latent space geometry of these generative models could help overcome these issues.
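As a point of reference for the latent space discussed above, the two ingredients that shape a VAE's latent space are the reparameterization trick and the closed-form KL regularizer toward the standard-normal prior. The following is a minimal NumPy sketch of these two pieces (an illustrative example, not the project's code; all function names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # Sample z = mu + sigma * eps with eps ~ N(0, I); writing the sample
    # this way keeps it differentiable with respect to (mu, log_var).
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ), the ELBO
    # term that pulls the approximate posterior toward the prior and
    # thereby organizes the latent space.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

# A latent code that already matches the prior incurs zero KL penalty.
mu = np.zeros((1, 2))
log_var = np.zeros((1, 2))
print(kl_to_standard_normal(mu, log_var))  # → [0.]
```

With very few training samples, this KL term can dominate the reconstruction term, which is one reason the learned latent geometry degrades in the small-data regime described above.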