
The different Deep Learning algorithms

Several types of algorithms are used in deep learning. Each algorithm has its own specificities and applications.

Convolutional Neural Networks (CNN)


Also called ConvNets, CNNs consist of multiple layers responsible for processing the data and extracting features from it. Specifically, convolutional neural networks are used for analysis and object detection. They can be used, for example, to recognize satellite images, process medical images, detect anomalies, or predict time series.
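To make this concrete, here is a minimal sketch of a small ConvNet image classifier in PyTorch; the layer sizes, the 32x32 RGB input, and the ten output classes are illustrative assumptions, not taken from the article.

# A minimal CNN sketch: convolutional layers extract features, a linear
# layer maps them to class scores (toy dimensions, assumed for illustration).
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Convolutional layers extract local features from the image.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),               # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),               # 16x16 -> 8x8
        )
        # A fully connected layer turns the extracted features into class scores.
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example: a batch of four 32x32 RGB images.
scores = SmallConvNet()(torch.randn(4, 3, 32, 32))
print(scores.shape)  # torch.Size([4, 10])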

Recurrent Neural Networks (RNN)


Recurrent neural networks have connections that form directed cycles, so the output of one step is fed back as an input to the current step. An RNN can therefore memorize previous inputs using its internal memory. In practice, RNNs are used for image captioning, natural language processing, and machine translation.
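As a rough illustration of this recurrence, the sketch below builds a single recurrent layer with PyTorch's nn.RNN; the batch size, sequence length, and feature dimensions are arbitrary assumptions.

# The hidden state produced at one step is fed back in at the next step,
# which is how the network "remembers" previous inputs.
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)

x = torch.randn(2, 5, 8)          # batch of 2 sequences, 5 steps, 8 features
h0 = torch.zeros(1, 2, 16)        # initial hidden state (the internal memory)

outputs, h_final = rnn(x, h0)
print(outputs.shape)              # torch.Size([2, 5, 16]): one output per step
print(h_final.shape)              # torch.Size([1, 2, 16]): memory after the last step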

Radial Basis Function Networks (RBFN)


These algorithms are a particular kind of feedforward neural network that uses radial basis functions as activation functions. They consist of an input layer, a hidden layer, and an output layer. RBFNs are generally used for classification, time-series prediction, and regression.
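A minimal NumPy sketch of such a network is shown below, assuming a toy one-dimensional regression task; the Gaussian centres, their width, and the least-squares fit of the output layer are illustrative choices rather than anything prescribed by the article.

# RBFN sketch: a hidden layer of Gaussian units centred on a few prototypes,
# followed by a linear output layer fitted in closed form.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = sin(x) plus noise.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

centers = np.linspace(-3, 3, 10).reshape(-1, 1)   # hidden-unit prototypes (assumed)
width = 0.7                                       # Gaussian width (assumed)

def rbf_features(X):
    # Gaussian activation of each hidden unit for each input point.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# Fit the linear output layer by least squares.
Phi = rbf_features(X)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Predict on new inputs.
X_test = np.array([[0.0], [1.5]])
print(rbf_features(X_test) @ w)   # approximately sin(0.0) and sin(1.5)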

Long Short-Term Memory Networks (LSTM)


LSTMs are a variant of RNNs. They can learn and memorize long-term dependencies, retaining stored information over long periods. They are particularly useful for predicting time series because they remember previous inputs. Beyond this use case, LSTMs are also used to compose music and to recognize speech.
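As an example of the time-series use case, here is a minimal PyTorch sketch that predicts the next value of a sequence; the hidden size, window length, and single-output head are assumptions made for illustration.

# LSTM sketch for one-step-ahead forecasting: the cell state carries
# long-term memory across the input window.
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x: (batch, time, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # predict the next value from the last step

model = LSTMForecaster()
series = torch.sin(torch.linspace(0, 6.28, 20)).view(1, 20, 1)  # toy input window
print(model(series).shape)   # torch.Size([1, 1]): the predicted next value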

Generative Adversarial Networks (GAN)


GANs create new data instances that resemble the training data. They have two main components: a generator and a discriminator. The generator learns to produce fake data, while the discriminator learns to distinguish that fake data from real data. GANs are generally used by video game creators to improve 2D textures.
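The sketch below illustrates this generator/discriminator interplay with a toy PyTorch training loop; the two-dimensional Gaussian "real" data, the network sizes, and the learning rates are assumptions made purely for illustration.

# GAN sketch: the generator turns random noise into fake samples, the
# discriminator scores samples as real or fake, and the two are trained
# against each other.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 2) + 3.0            # toy "real" data: a shifted Gaussian
    fake = generator(torch.randn(64, 16))      # fake samples from random noise

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()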

Restricted Boltzmann Machines (RBM)


This algorithm was developed by Professor Geoffrey Hinton. Restricted Boltzmann machines are stochastic neural networks consisting of two layers: visible units and hidden units. These networks are able to learn a probability distribution over a set of inputs.
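For illustration, here is a minimal NumPy sketch of an RBM trained with one step of contrastive divergence (CD-1); the numbers of visible and hidden units, the learning rate, and the random binary data are assumptions, not details from the article.

# RBM sketch: visible and hidden binary units connected by a weight matrix,
# trained with CD-1 to model the distribution of the input data.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 6, 3, 0.1

W = 0.01 * rng.normal(size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)          # visible biases
b_h = np.zeros(n_hidden)           # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

data = rng.integers(0, 2, size=(32, n_visible)).astype(float)  # toy binary inputs

for epoch in range(100):
    # Positive phase: hidden probabilities given the data.
    h_prob = sigmoid(data @ W + b_h)
    h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)

    # Negative phase: reconstruct the visible units, then the hidden units.
    v_recon = sigmoid(h_sample @ W.T + b_v)
    h_recon = sigmoid(v_recon @ W + b_h)

    # Contrastive-divergence update of weights and biases.
    W += lr * (data.T @ h_prob - v_recon.T @ h_recon) / len(data)
    b_v += lr * (data - v_recon).mean(axis=0)
    b_h += lr * (h_prob - h_recon).mean(axis=0)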
Note, however, that the list of algorithms presented above is not exhaustive. Other types exist, such as:

autoencoders,
deep belief networks (DBN),
multilayer perceptrons (MLP).
