  1. Transformer vs Autoencoder: Decoding Machine Learning …

    Sep 8, 2023 · Autoencoders use a simple 3-layer architecture in which the output units are trained to reconstruct the input units, whereas Transformers use multiple layers of self-attention …

  2. Exploring Neural Network Architectures: Autoencoders, Encoder …

    Apr 4, 2023 · Transformers bridge the gap between autoencoders and encoder-decoder architectures by offering a versatile and powerful approach to various AI tasks. They can be …

  3. Transformer (deep learning architecture) - Wikipedia

    In the seq2seq models that preceded it, the architecture consists of two parts. The encoder is an LSTM that takes in a sequence of tokens and turns it into a vector. The decoder is another LSTM that converts the vector into a …
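
    To ground that seq2seq description, here is a minimal sketch of such an LSTM encoder-decoder; the PyTorch modules and all dimensions are illustrative assumptions, not taken from the article:

    ```python
    import torch
    import torch.nn as nn

    # Hypothetical sketch of the seq2seq design described above: an LSTM
    # encoder compresses the source tokens into a fixed vector (h, c), and
    # an LSTM decoder unrolls that vector back into output tokens.
    vocab_size, emb_dim, hidden_dim = 1000, 32, 64
    embed = nn.Embedding(vocab_size, emb_dim)
    encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
    decoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
    to_vocab = nn.Linear(hidden_dim, vocab_size)

    src = torch.randint(0, vocab_size, (1, 7))   # one source sequence, 7 tokens
    _, (h, c) = encoder(embed(src))              # final state is the "vector"
    tgt = torch.randint(0, vocab_size, (1, 5))   # teacher-forced target tokens
    dec_out, _ = decoder(embed(tgt), (h, c))     # decoder starts from that vector
    logits = to_vocab(dec_out)                   # (1, 5, vocab_size) predictions
    ```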

  4. Autoencoders in Machine Learning - GeeksforGeeks

    Mar 1, 2025 · The architecture of an autoencoder consists of three main components: the Encoder, the Bottleneck (Latent Space) and the Decoder. Let's take a deep dive into each part to …
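
    A minimal sketch of those three parts, assuming a toy fully connected network (the dimensions, activation, and weights are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d, k = 8, 3                        # input dimension d, bottleneck dimension k
    W_enc = rng.normal(size=(d, k))    # Encoder: compresses the input to a code
    W_dec = rng.normal(size=(k, d))    # Decoder: expands the code back out

    x = rng.normal(size=(1, d))
    z = np.tanh(x @ W_enc)             # Bottleneck (latent space), shape (1, k)
    x_hat = z @ W_dec                  # Reconstruction, same shape as the input
    loss = np.mean((x - x_hat) ** 2)   # training minimizes reconstruction error
    ```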

  5. Transformer-Based Autoencoders Overview - apxml.com

    Briefly introduces the application of Transformer architectures in autoencoding tasks (e.g., the MAE concept).
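
    As a concrete hint of the MAE concept, a sketch of the masking step alone; the patch count and mask ratio echo common MAE settings but are assumptions here, and the Transformer encoder/decoder are elided:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    patches = rng.normal(size=(196, 768))   # e.g. a 14x14 grid of image patches
    mask_ratio = 0.75                       # MAE hides most of the patches
    n_keep = int(len(patches) * (1 - mask_ratio))
    perm = rng.permutation(len(patches))
    visible = patches[perm[:n_keep]]        # only these reach the encoder
    masked_idx = perm[n_keep:]              # reconstruction loss is computed here
    # encoder(visible) -> latents; a light decoder then predicts the masked
    # patches from the latents plus learned mask tokens (both elided here).
    ```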

  6. 8 Representation Learning (Autoencoders) – 6.390 - Intro to …

    Another influential encoder-decoder architecture is the Transformer, covered in Chapter 9. Transformers consist of multiple encoder and decoder layers combined with self-attention …
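
    A minimal sketch of the self-attention operation those layers stack, assuming a single head and omitting the learned query/key/value projections of a full Transformer layer:

    ```python
    import numpy as np

    def self_attention(X):
        """Single-head scaled dot-product self-attention. A full Transformer
        layer adds learned projections, multiple heads, residual connections,
        and feed-forward sublayers."""
        d = X.shape[-1]
        scores = X @ X.T / np.sqrt(d)                    # token-pair similarities
        scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
        return weights @ X                               # each output mixes all tokens

    X = np.random.default_rng(0).normal(size=(5, 8))     # 5 tokens, width 8
    print(self_attention(X).shape)                       # (5, 8)
    ```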

  7. Introduction to autoencoders. - Jeremy Jordan

    Mar 19, 2018 · In this post, I'll discuss some of the standard autoencoder architectures for imposing these two constraints and tuning the trade-off; in a follow-up post I'll discuss …
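
    One common way such a trade-off appears in the objective, sketched here with an L1 code penalty as an illustrative (assumed) regularizer, not a choice taken from the post:

    ```python
    import numpy as np

    def regularized_ae_loss(x, x_hat, z, lam=1e-3):
        """Reconstruction term plus a penalty on the code z. The L1 penalty
        and the weight lam are illustrative choices."""
        reconstruction = np.mean((x - x_hat) ** 2)   # stay faithful to the input
        penalty = lam * np.mean(np.abs(z))           # discourage memorization
        return reconstruction + penalty
    ```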

  8. machine learning - Variational Autoencoders VS Transformers

    Jan 8, 2022 · A VAE is an autoencoder whose encoding distribution is regularised during training in order to ensure that its latent space has good properties, allowing us to generate …
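
    A minimal sketch of that regularisation, assuming the usual Gaussian setup: the reparameterized sample and the KL term that pulls the encoding distribution toward a standard normal (shapes and toy inputs are assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def vae_latent_step(mu, log_var):
        """Reparameterized sample z = mu + sigma * eps, plus the KL divergence
        to a standard normal that regularises the encoding distribution."""
        eps = rng.normal(size=mu.shape)
        z = mu + np.exp(0.5 * log_var) * eps
        kl = -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))
        return z, kl

    mu, log_var = rng.normal(size=4), rng.normal(size=4)  # toy encoder outputs
    z, kl = vae_latent_step(mu, log_var)   # total loss = reconstruction + kl
    ```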

  9. 9 Autoencoders – 6.390 - Intro to Machine Learning

    We seek to learn an autoencoder that will output a new dataset \(\mathcal{D}_{out} = \{a^{(1)}, \ldots, a^{(n)}\}\), where \(a^{(i)}\in \mathbb{R}^k\) with \(k < d\). We can think about \(a^{(i)}\) …
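
    A concrete instance with \(k < d\), using PCA as a linear stand-in for the learned encoder (the sizes n, d, k are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, k = 100, 10, 3                  # n points in R^d, codes in R^k, k < d
    D_in = rng.normal(size=(n, d))

    X = D_in - D_in.mean(axis=0)          # center the data
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    D_out = X @ Vt[:k].T                  # each a^(i) now lives in R^k
    print(D_out.shape)                    # (100, 3)
    ```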

  10. Transformer-based Encoder-Decoder Models - Hugging Face

    Let's visualize the complete process of auto-regressive generation of transformer-based encoder-decoder models. The transformer-based encoder is colored in green and the transformer …
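
    A hedged sketch of that generation loop; `encoder`, `decoder`, and the token ids are hypothetical stand-ins, not the library's API:

    ```python
    def generate(encoder, decoder, src_ids, bos_id, eos_id, max_len=50):
        """Greedy auto-regressive decoding. `encoder` and `decoder` are
        hypothetical callables standing in for the two halves of the
        post's figures; this is not the Hugging Face API."""
        memory = encoder(src_ids)               # encode the source exactly once
        out = [bos_id]
        for _ in range(max_len):
            logits = decoder(out, memory)       # condition on source + prefix
            next_id = int(logits[-1].argmax())  # greedy: most likely next token
            out.append(next_id)
            if next_id == eos_id:               # stop at end-of-sequence
                break
        return out
    ```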
