Auxiliary Deep Generative Models

Deep generative models based on continuous variational distributions parameterized by deep networks give state-of-the-art performance. In this paper we propose a framework for extending the latent representation with extra auxiliary variables in order to make the variational distribution more expressive for semi-supervised learning. By utilizing the stochasticity of the auxiliary variable, we demonstrate how to train discriminative classifiers, resulting in state-of-the-art performance within semi-supervised learning, exemplified by a 0.96% error on MNIST using 100 labeled data points. Furthermore, we observe empirically that using auxiliary variables increases convergence speed, suggesting that less expressive variational distributions not only lead to looser bounds but also to slower model training.
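As a rough sketch of the idea in standard VAE notation (an illustration, not quoted from the paper; the exact factorization used by the authors may differ): the inference model is extended from q(z|x) to a joint q(a, z|x) = q(a|x) q(z|a, x), so that marginalizing over the auxiliary variable a yields a more expressive approximate posterior over z. With a matching generative term p(a|z, x), the variational bound takes the form

\log p(x) \;\ge\; \mathbb{E}_{q(a,z\mid x)}\!\big[\log p(x\mid z) + \log p(z) + \log p(a\mid z, x) - \log q(a\mid x) - \log q(z\mid a, x)\big].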
Added 01 Apr 2016
Updated 01 Apr 2016
Type Journal
Year 2016
Where CoRR
Authors Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, Ole Winther