
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

(ARTIFICIAL INTELLIGENCE & MACHINE LEARNING)

Undercomplete Autoencoder

• An undercomplete autoencoder is one of the simplest types of autoencoders.


• An undercomplete autoencoder takes in an image and tries to predict the same image as output, thus reconstructing the image from the compressed bottleneck region.
• Undercomplete autoencoders are truly unsupervised, as they do not take any form of label: the target is the same as the input.
• The primary use of such autoencoders is the generation of the latent space, or bottleneck, which forms a compressed substitute of the input data and can easily be decompressed back with the help of the network when needed.
• This compression of the data can be modeled as a form of dimensionality reduction.
• In this case there is no explicit regularization mechanism; instead, we ensure that the size of the bottleneck is always smaller than the input size, which prevents the network from simply copying the input and helps avoid overfitting.
• This type of configuration is typically used as a dimensionality reduction technique (more powerful than PCA, since it is also able to capture non-linearities in the data).
• An undercomplete autoencoder is an unsupervised neural network that you can use to generate a compressed version of the input data.
• This is done by taking in an image and trying to predict the same image as output, thus reconstructing the image from its compressed bottleneck region; a minimal code sketch of this idea follows below.
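The following is a minimal sketch of an undercomplete autoencoder in PyTorch, meant only to illustrate the points above. It assumes flattened 28x28 grayscale images (784 inputs); the layer sizes, the bottleneck dimension of 32, the learning rate, and the dummy random batch are illustrative assumptions rather than values taken from this handout.

import torch
import torch.nn as nn

class UndercompleteAutoencoder(nn.Module):
    """Autoencoder whose bottleneck (32 units) is much smaller than the
    784-dimensional input, forcing the network to learn a compressed code."""
    def __init__(self, input_dim=784, bottleneck_dim=32):
        super().__init__()
        # Encoder: compress the input down to the bottleneck (latent) vector
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, bottleneck_dim),
            nn.ReLU(),
        )
        # Decoder: reconstruct the original input from the bottleneck
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_dim, 128),
            nn.ReLU(),
            nn.Linear(128, input_dim),
            nn.Sigmoid(),  # outputs in [0, 1], matching normalized pixel values
        )

    def forward(self, x):
        latent = self.encoder(x)           # compressed (bottleneck) representation
        reconstruction = self.decoder(latent)
        return reconstruction

# Unsupervised training: the target is the input itself, so no labels are needed.
model = UndercompleteAutoencoder()
criterion = nn.MSELoss()                   # reconstruction error
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(64, 784)                    # dummy batch of flattened images
for epoch in range(5):
    optimizer.zero_grad()
    x_hat = model(x)
    loss = criterion(x_hat, x)             # compare reconstruction to the input
    loss.backward()
    optimizer.step()

Once trained, model.encoder(x) yields the compressed latent code (the dimensionality-reduced representation), and model.decoder(code) decompresses it back to the original input space.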





