Optimization of Deep Generative Models


Problem of Interest


Our Method & Results


Figure 1. The architecture of the Multi-adversarial Autoencoder (MAAE). (top) A standard autoencoder that reconstructs data from a latent vector via an encoder and decoder; (bottom) multiple discriminators trained to distinguish latent samples produced by the encoder from samples drawn from the prior, providing soft-ensemble feedback that guides the encoder toward a well-matched variational posterior.

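The sketch below (not the authors' code) illustrates the training scheme suggested by Figure 1 in PyTorch: a deterministic encoder/decoder plus K latent-space discriminators, where the averaged (soft-ensemble) discriminator feedback pushes the encoder's codes toward a Gaussian prior. Layer sizes, K, the MSE reconstruction loss, and the equal loss weighting are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    latent_dim, data_dim, K = 8, 784, 3

    encoder = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
    decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))
    # K independent discriminators, each judging whether a latent code comes from the prior.
    discriminators = nn.ModuleList(
        nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 1)) for _ in range(K)
    )

    opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    opt_d = torch.optim.Adam(discriminators.parameters(), lr=1e-3)

    def train_step(x):
        # Reconstruction phase: plain autoencoder objective (top path in Figure 1).
        z = encoder(x)
        recon_loss = F.mse_loss(decoder(z), x)

        # Discriminator phase: each D_k learns to tell prior samples from encoder codes.
        z_prior = torch.randn_like(z)
        d_loss = 0.0
        for d in discriminators:
            real_logits, fake_logits = d(z_prior), d(z.detach())
            d_loss = d_loss + F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits)) \
                            + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
        opt_d.zero_grad()
        (d_loss / K).backward()
        opt_d.step()

        # Regularization phase: the averaged (soft-ensemble) discriminator feedback
        # drives the encoder to make its codes indistinguishable from prior samples.
        adv_losses = []
        for d in discriminators:
            logits = d(z)
            adv_losses.append(F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits)))
        adv_loss = torch.stack(adv_losses).mean()
        opt_ae.zero_grad()
        (recon_loss + adv_loss).backward()
        opt_ae.step()
        return recon_loss.item(), adv_loss.item()
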
Figure 2. Illustration of latent distributions on test data learned by various VAE-based models and ours (MAAE). MAAE (f) produces a latent space closer to the prior (a), indicating better inference quality. In (g), the lower error rate on a semi-supervised task using the learned latent vectors shows that MAAE obtains meaningful and informative representations.

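As a rough illustration of the semi-supervised check in Figure 2(g) (not the paper's exact protocol), one can encode the test data, fit a simple classifier on a small labeled subset of the latent vectors, and report the error rate on the remaining codes. The classifier choice and the number of labeled examples (100 here) are assumptions for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def latent_error_rate(z, labels, n_labeled=100, seed=0):
        # z: (N, latent_dim) array of encoder outputs; labels: (N,) class labels.
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(z))
        train, test = idx[:n_labeled], idx[n_labeled:]
        # Train on the small labeled subset, report error rate on the held-out codes.
        clf = LogisticRegression(max_iter=1000).fit(z[train], labels[train])
        return 1.0 - clf.score(z[test], labels[test])

    # Usage: z = encoder(x).detach().numpy() for the test images x, labels = their classes.
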
Publications & Github