VAE and Domain Adaptation Explained

The document discusses variational autoencoders (VAEs) and domain adaptation. It provides the following key points: 1) VAEs enforce conditions on the latent variable to generate meaningful outputs from random latent vectors, whereas autoencoders do not regularize the latent space. 2) VAEs use the reparameterization trick during backpropagation to estimate gradients through stochastic nodes, allowing the latent space to be learned. 3) Domain adaptation techniques allow models trained on one domain or dataset to be adapted to a new target domain without retraining from scratch, by leveraging knowledge gained from the source domain. This saves resources compared to fully retraining models for new domains.


VAE, Domain Adaptation

Date: 16.11.2022

VAE in a nutshell
Difference between autoencoder and variational autoencoder

Autoencoder (AE)
· Generates a compressed representation of the input in a latent space
· The latent variable is not regularized
· Picking a random latent variable generates garbage output
· The latent space has discontinuities
· The latent variable takes deterministic values
· The latent space lacks generative capability
Variational Autoencoder (VAE)
· Regularizes the latent variable to follow a standard normal distribution N(0, I)
· The encoder outputs the latent variable in compressed form as a mean and a variance
· The latent space is smooth and continuous
· A random value of the latent variable generates meaningful output at the decoder
· The input of the decoder is stochastic: it is sampled from a Gaussian with the mean and variance output by the encoder
· Regularized latent space
· The latent space has generative capabilities

Variational autoencoder latent space visualization:
https://youtu.be/sV2FOdGqlX0
Block diagram of VAE
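The diagram's encoder → sample → decoder pipeline can be sketched as a tiny forward pass. This is a minimal illustration with made-up linear layers and toy dimensions (`D_IN`, `D_Z` and the weight matrices are assumptions, not the slides' actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions, assumed for illustration: 4-d input, 2-d latent code.
D_IN, D_Z = 4, 2
W_enc = rng.standard_normal((D_IN, 2 * D_Z)) * 0.1   # encoder weights
W_dec = rng.standard_normal((D_Z, D_IN)) * 0.1       # decoder weights

def encode(x):
    """Encoder: maps input x to the mean and log-variance of q(z|x)."""
    h = x @ W_enc
    return h[:D_Z], h[D_Z:]          # mu, log_var

def decode(z):
    """Decoder: maps a latent sample z back to input space."""
    return z @ W_dec

def vae_forward(x):
    mu, log_var = encode(x)
    eps = rng.standard_normal(D_Z)               # stochastic node
    z = mu + np.exp(0.5 * log_var) * eps         # reparameterized sample
    return decode(z), mu, log_var

x = np.ones(D_IN)
x_recon, mu, log_var = vae_forward(x)
```

The only stochastic step is drawing `eps`; everything else is a deterministic function of the inputs and weights, which is what the reparameterization slide below exploits.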
Reparameterization in VAE
The latent vector is sampled from the encoder-generated distribution before being fed to the decoder. This random sampling makes backpropagation through the encoder difficult, since errors cannot be traced back through the sampling step.

Hence the reparameterization trick is used to model the sampling process in a way that removes the randomness from the network path and makes it possible for errors to propagate through the network. If z ~ N(μ(x_i), Σ(x_i)), then we can sample z as z = μ(x_i) + √(Σ(x_i)) · ε, where ε ~ N(0, I). So we draw samples from N(0, I), which does not depend on the encoder parameters.
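The sampling equation above can be demonstrated in a few lines of NumPy. This is a hedged sketch; parameterizing the variance through its logarithm (`log_var`) is an assumption commonly made in practice, not something stated on the slide:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps, with eps ~ N(0, I).

    The randomness is isolated in eps, so gradients could flow
    through mu and log_var (the encoder outputs) unimpeded.
    """
    eps = rng.standard_normal(np.shape(mu))
    sigma = np.exp(0.5 * log_var)   # log-variance -> standard deviation
    return mu + sigma * eps

mu = np.array([0.5, -1.0])
log_var = np.zeros(2)               # unit variance
z = reparameterize(mu, log_var, rng)
```

Averaging many such samples recovers `mu`, confirming that z follows N(μ, Σ) even though the noise source itself is parameter-free.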
With and without reparameterization diagram
(m = mean, s = std, N(0, I) = normal distribution)

With reparameterization
VAE (17-minute video; watch it and note down the important points):
https://youtu.be/YV9D3TWY5Zo
VAE steps/Algorithm

q(z|x_i) = probability of the code/latent vector output z given input x_i

p(x|z) = probability of the reconstructed output x at the decoder given latent vector z


Purpose of different z in VAE

From this example you can see that long hair is added to all the images, which means this sampled z has the property of adding long hair to the image.
Similarly, different sampled z vectors add different properties to the image.
Loss function in VAE
Loss function = Reconstruction loss + KL divergence loss
• For a single data point x_i we get the loss function:

L(x_i) = − E_{z ~ q(z|x_i)} [ log p(x_i|z) ] + KL( q(z|x_i) ‖ p(z) )

• The first term promotes recovery of the input.

• The second term keeps the encoding continuous: the encoding is compared to a fixed prior p(z) regardless of the input, which inhibits memorization. In other words, it is the KL loss; the Kullback-Leibler divergence measures the distance between the learned distribution and the standard normal distribution.
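Both terms can be computed directly. In the sketch below, the KL term uses the standard closed form for a diagonal Gaussian q(z|x_i) against an N(0, I) prior; the squared-error reconstruction term is an assumption chosen for simplicity (the actual reconstruction loss depends on the decoder's output distribution):

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var):
    # Reconstruction term: promotes recovery of the input.
    recon = np.sum((x - x_recon) ** 2)
    # KL( N(mu, diag(exp(log_var))) || N(0, I) ), closed form:
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return recon + kl

x = np.array([1.0, 2.0])
# Perfect reconstruction and a code matching the prior: both terms vanish.
loss = vae_loss(x, x, np.zeros(2), np.zeros(2))
```

With x_recon = x, μ = 0 and log σ² = 0, both terms are exactly zero; moving μ away from the origin or σ² away from 1 makes the KL term grow, which is the regularization pressure described above.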
Domain Adaptation in Deep Learning
Need for Domain Adaptation
Firstly, neural networks require a lot of labeled data for training, and manually annotating it is a laborious task. Secondly, a trained deep learning model performs well on test data only if it comes from the same data distribution as the training data. A dataset created from photos taken on a mobile phone has a significantly different distribution from one taken with a high-end DSLR camera, and traditional transfer learning methods fail in such cases.

Thus, for every new dataset, we would first need to annotate the samples and then re-train the deep learning model to adapt it to the new data. Training a sizable deep learning model on a dataset as big as ImageNet even once takes a lot of computational power (model training may go on for weeks), and training it again is infeasible.

Domain Adaptation is a method that tries to address this problem. Using domain adaptation, a model trained on one dataset does not need to be re-trained from scratch on a new dataset. Instead, the pre-trained model can be adjusted to give optimal performance on the new data. This saves a lot of computational resources, and in techniques like unsupervised domain adaptation, the new data does not need to be labeled.
Definition
Domain Adaptation is a technique to improve the performance of a model on a target domain containing
insufficient annotated data by using the knowledge learned by the model from another related domain with
adequate labeled data.
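As a concrete sketch of the idea, one simple family of unsupervised domain adaptation methods aligns target-domain feature statistics with the source domain, so a source-trained classifier can be reused without target labels. The moment-matching function below is an illustrative assumption (in the spirit of statistic-alignment methods), not a technique from these slides:

```python
import numpy as np

def adapt_features(source_feats, target_feats):
    """Shift and scale target features so their per-dimension mean and
    std match the source domain (first/second-moment matching)."""
    src_mean = source_feats.mean(axis=0)
    src_std = source_feats.std(axis=0) + 1e-8
    tgt_mean = target_feats.mean(axis=0)
    tgt_std = target_feats.std(axis=0) + 1e-8
    return (target_feats - tgt_mean) / tgt_std * src_std + src_mean

rng = np.random.default_rng(1)
source = rng.normal(0.0, 1.0, size=(500, 3))   # e.g. DSLR-photo features
target = rng.normal(5.0, 2.0, size=(500, 3))   # e.g. phone-photo features
adapted = adapt_features(source, target)
```

After adaptation the target batch has source-domain statistics, so a model trained only on source features sees inputs drawn from a familiar distribution and no target labels were needed.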
Simple example of domain adaptation
Thank you Everyone!!!
