
Addressing posterior collapse by splitting decoders in variational ...
Feb 14, 2024 · We propose a novel variational recurrent autoencoder model, named BVRNN, to alleviate the posterior collapse problem. The new model uses auxiliary decoders to force the latent variables to encode knowledge about future steps.
Scale-VAE: Preventing Posterior Collapse in Variational Autoencoder ...
However, when employing a strong autoregressive generation network, VAE tends to converge to a degenerate local optimum known as posterior collapse. In this paper, we propose a model named Scale-VAE to solve this problem.
Beyond Vanilla Variational Autoencoders: Detecting Posterior Collapse ...
Jun 8, 2023 · In this work, we advance the theoretical understanding of posterior collapse to two important and prevalent yet less studied classes of VAE: conditional VAE and hierarchical VAE.
Exploring Social Posterior Collapse in Variational Autoencoder …
Dec 1, 2021 · In this work, we argue that one of the typical formulations of VAEs in multi-agent modeling suffers from an issue we refer to as social posterior collapse, i.e., the model is prone to ignoring historical social context when predicting the future trajectory of an agent.
Posterior collapse in Variational Autoencoders (VAEs) arises when the variational posterior distribution closely matches the prior for a subset of latent variables. This paper presents a simple and intuitive explanation for posterior collapse through the analysis of linear VAEs and their direct correspondence with Probabilistic PCA (pPCA).
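The definition above (the variational posterior matching the prior for a subset of latent dimensions) suggests a simple diagnostic: compute the per-dimension KL divergence between the diagonal-Gaussian posterior and the standard-normal prior, and flag dimensions whose KL is near zero. The following is a minimal NumPy sketch; the function names and the 1e-2 threshold are illustrative choices, not taken from any of the papers listed here.

```python
import numpy as np

def kl_per_dim(mu, logvar):
    # KL( N(mu, exp(logvar)) || N(0, 1) ) for each latent dimension,
    # averaged over the batch: 0.5 * (mu^2 + sigma^2 - 1 - logvar)
    return 0.5 * (mu**2 + np.exp(logvar) - 1.0 - logvar).mean(axis=0)

def collapsed_dims(mu, logvar, eps=1e-2):
    # Dimensions whose average KL is (near) zero carry no information:
    # the approximate posterior has matched the prior there.
    kl = kl_per_dim(mu, logvar)
    return np.where(kl < eps)[0]

# Toy check: dimension 0 is active (posterior shifted away from the
# prior), dimension 1 is collapsed (posterior equals the prior).
rng = np.random.default_rng(0)
mu = np.stack([rng.normal(2.0, 0.1, 1000), np.zeros(1000)], axis=1)
logvar = np.zeros((1000, 2))
print(collapsed_dims(mu, logvar))  # prints [1]
```

This is essentially the "active units" style of measurement often used in the VAE literature: a dimension is considered active only if its posterior deviates measurably from the prior across the dataset.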
Preventing posterior collapse in variational autoencoders for …
Oct 28, 2021 · Variational autoencoders trained to minimize the reconstruction error are sensitive to the posterior collapse problem, in which the approximate posterior distribution becomes equal to the prior. We propose a novel regularization method based …
Preventing Posterior Collapse with DVAE for Text Modeling - MDPI
Apr 14, 2025 · Experimental results show the excellent performance of DVAE in density estimation, representation learning, and text generation. The variational autoencoder (VAE) [1, 2] is a widely used generative framework that combines deep latent variable models with amortized variational inference techniques.
Posterior collapse is a pervasive issue in Variational Autoencoders (VAEs) that leads to the learned latent representations becoming trivial and devoid of meaningful information.
…with a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means of producing meaningful representations.
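The collapse condition described across these results can be read directly off the evidence lower bound. In standard notation (assumed here, not quoted from any single result above), the objective is

\[
\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right] - \mathrm{KL}\!\left(q_\phi(z \mid x) \,\middle\|\, p(z)\right).
\]

Posterior collapse is the degenerate optimum where the KL term is driven to zero, i.e. \(q_\phi(z \mid x) = p(z)\) for every \(x\), so the decoder \(p_\theta(x \mid z)\) can ignore \(z\) entirely; with a strong autoregressive decoder this optimum is easy for training to reach.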
We propose a class of latent-identifiable variational autoencoders (LIDVAE) via Brenier maps to resolve latent variable non-identifiability and mitigate posterior collapse. Identifiability used to be mostly of theoretical interest, but it turns out to have important practical implications in modern machine learning.