DLT Unit 1

The document provides an overview of deep learning, its relationship with artificial intelligence (AI) and machine learning, and the architecture of artificial neural networks. It highlights the differences between machine learning and deep learning, types of neural networks, and core concepts in AI, including natural language processing and robotics. Additionally, it explains representation learning, emphasizing its importance in transforming raw data into useful features for predictive modeling.


Unit-I

Deep Learning:
What is Deep Learning? Artificial Intelligence, Machine Learning, and Deep Learning – Artificial Intelligence, Machine
Learning - Learning representations from the data – The “deep” in deep learning – Understanding how deep learning
works, in three figures – What deep learning has achieved so far – The promise of AI

Deep Learning:
Deep learning is a subset of machine learning that uses multilayered neural networks, called deep neural networks, to simulate the complex decision-making power of the human brain. Some form of deep learning powers most of the artificial intelligence (AI) applications in our lives today.
In a fully connected deep neural network, an input layer is followed by one or more hidden layers connected in sequence. Each neuron receives input from the neurons of the previous layer (or from the input layer). The output of one neuron becomes the input to neurons in the next layer, and this continues until the final layer produces the output of the network. The layers of the neural network transform the input data through a series of nonlinear transformations, allowing the network to learn complex representations of the input data.
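This layer-by-layer flow can be sketched in a few lines of Python; the layer sizes, random weights, and choice of ReLU activation below are illustrative, not taken from the text:

```python
import numpy as np

def relu(x):
    # Nonlinear activation applied between layers
    return np.maximum(0, x)

def forward(x, layers):
    """Pass input x through a list of (weights, bias) layers."""
    for i, (W, b) in enumerate(layers):
        x = W @ x + b           # weighted sum of the previous layer's outputs
        if i < len(layers) - 1:
            x = relu(x)         # nonlinear transformation for hidden layers
    return x

rng = np.random.default_rng(0)
# Toy network: 4 inputs -> 8 hidden units -> 3 outputs
layers = [(rng.standard_normal((8, 4)), np.zeros(8)),
          (rng.standard_normal((3, 8)), np.zeros(3))]
print(forward(rng.standard_normal(4), layers).shape)  # (3,)
```

Each `(W, b)` pair is one layer; stacking more pairs makes the network "deeper".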

Artificial neural networks:


Artificial neural networks are built on the principles of the structure and operation of human neurons. They are also known as neural networks or neural nets. An artificial neural network's input layer, which is the first layer, receives input from external sources and passes it on to the hidden layer, which is the second layer. Each neuron in the hidden layer receives information from the neurons in the previous layer, computes a weighted sum, and passes the result to the neurons in the next layer. These connections are weighted: each input is assigned a distinct weight that controls how strongly it influences the neuron's output. The weights are then adjusted during the training process to improve the performance of the model.
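The weight-adjustment process can be illustrated with a single neuron trained by gradient descent; the target function, learning rate, and data below are invented for this sketch:

```python
import numpy as np

# One neuron with two inputs, trained to fit y = x1 + x2 (a made-up target).
rng = np.random.default_rng(1)
w = rng.standard_normal(2)   # connection weights, initially random
b = 0.0
lr = 0.1                     # learning rate (illustrative value)

X = rng.standard_normal((100, 2))
y = X.sum(axis=1)            # labels the neuron should learn to reproduce

for _ in range(200):
    pred = X @ w + b                  # weighted total of the inputs
    err = pred - y
    # Nudge each weight in proportion to its contribution to the error
    w -= lr * (X.T @ err) / len(X)
    b -= lr * err.mean()

print(np.round(w, 2))  # close to [1. 1.]
```

After training, the weights have converged toward the values that reproduce the target, which is exactly the adjustment the text describes.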

In a fully connected artificial neural network, an input layer is followed by one or more hidden layers. Each neuron receives input from the neurons of the previous layer or from the input layer, and its output becomes the input to the neurons in the next layer, continuing until the final layer is reached. After passing through the hidden layers, the data has been transformed into features that are useful to the output layer. Finally, the output layer produces the artificial neural network's response to the incoming data.

Difference between Machine Learning and Deep Learning:

1. Machine learning applies statistical algorithms to learn the hidden patterns and relationships in a dataset; deep learning uses artificial neural network architectures for the same purpose.
2. Machine learning can work with smaller datasets; deep learning requires a much larger volume of data.
3. Machine learning is better suited to simpler tasks; deep learning is better for complex tasks such as image processing and natural language processing.
4. Machine learning models take less time to train; deep learning models take more time to train.
5. In machine learning, a model is built from relevant features that are manually extracted (for example, from images, to detect an object); in deep learning, relevant features are extracted automatically in an end-to-end learning process.
6. Machine learning models are less complex and their results are easier to interpret; deep learning models work like a black box, so their results are not easy to interpret.
7. Machine learning can run on a CPU and requires less computing power; deep learning typically requires a high-performance computer with a GPU.

Types of neural networks:


Deep Learning models are able to automatically learn features from the data, which makes them well-suited for tasks
such as image recognition, speech recognition, and natural language processing. The most widely used architectures
in deep learning are feedforward neural networks, convolutional neural networks (CNNs), and recurrent neural
networks (RNNs).

1. Feedforward neural networks (FNNs) are the simplest type of ANN, with a linear flow of information through the network. FNNs have been widely used for tasks such as image classification, speech recognition, and natural language processing.
2. Convolutional Neural Networks (CNNs) are designed specifically for image and video recognition tasks. CNNs are able to automatically learn features from images, which makes them well-suited for tasks such as image classification, object detection, and image segmentation.
3. Recurrent Neural Networks (RNNs) are a type of neural network that can process sequential data, such as time series and natural language. RNNs maintain an internal state that captures information about previous inputs, which makes them well-suited for tasks such as speech recognition, natural language processing, and language translation.
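The internal state that an RNN carries from one input to the next can be sketched as a simple update rule; the sizes, weight scales, and tanh nonlinearity here are illustrative, not from the text:

```python
import numpy as np

def rnn_step(state, x, Wh, Wx, b):
    # The new state mixes the previous state with the current input,
    # so earlier inputs keep influencing later outputs.
    return np.tanh(Wh @ state + Wx @ x + b)

rng = np.random.default_rng(0)
hidden, inp = 5, 3           # sizes chosen for illustration
Wh = rng.standard_normal((hidden, hidden)) * 0.1
Wx = rng.standard_normal((hidden, inp)) * 0.1
b = np.zeros(hidden)

state = np.zeros(hidden)
for x in rng.standard_normal((4, inp)):   # a length-4 input sequence
    state = rnn_step(state, x, Wh, Wx, b)
print(state.shape)  # (5,)
```

The same `rnn_step` function is applied at every position in the sequence, which is what makes the network "recurrent".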

Artificial Intelligence:
Artificial intelligence (AI) is a branch of computer science focused on creating intelligent machines capable of
mimicking human cognitive functions like learning and problem-solving. It’s a broad field encompassing various
techniques, but here’s a breakdown to get you started:
The core goal of Artificial Intelligence (AI) is to emulate human intelligence in machines. This can involve tasks like:
- Reasoning: analyzing information and drawing logical conclusions.
- Learning: acquiring new knowledge and skills from data.
- Problem-solving: identifying and solving problems in a goal-oriented way.
- Decision-making: evaluating options and making choices based on available information.

Core Concepts in AI
Artificial Intelligence (AI) operates on a core set of concepts and technologies that enable machines to perform tasks
that typically require human intelligence. Here are some foundational concepts:
1. Machine Learning (ML): This is the backbone of AI, where algorithms learn from data without being explicitly programmed. It involves training an algorithm on a dataset, allowing it to improve over time and make predictions or decisions based on new data.
2. Neural Networks: Inspired by the human brain, these are networks of algorithms that mimic the way neurons interact, allowing computers to recognize patterns and solve common problems in the fields of AI, machine learning, and deep learning.
3. Deep Learning: A subset of ML, deep learning uses complex neural networks with many layers (hence "deep") to analyze various factors of data. This is instrumental in tasks like image and speech recognition.
4. Natural Language Processing (NLP): NLP involves programming computers to process and analyze large amounts of natural language data, enabling interactions between computers and humans using natural language.
5. Robotics: While often associated with AI, robotics merges AI concepts with physical components to create machines capable of performing a variety of tasks, from assembly lines to complex surgeries.
6. Expert Systems: These are AI systems that emulate the decision-making ability of a human expert, applying reasoning capabilities to reach conclusions.
Each of these concepts helps to build systems that can automate, enhance, and sometimes outperform human
capabilities in specific tasks.

Machine Learning:
Machine learning is a subset of artificial intelligence that focuses primarily on the creation of algorithms that enable a computer to learn independently from data and previous experience. Arthur Samuel coined the term "machine learning" in 1959. It can be summarized as follows:
Without being explicitly programmed, machine learning enables a machine to learn automatically from data, improve its performance with experience, and make predictions.
Machine learning algorithms build a mathematical model from sample historical data, or training data, that helps make predictions or decisions without being explicitly programmed. To develop predictive models, machine learning brings together statistics and computer science. Performance generally improves with the quantity of data we provide.

Classification of Machine Learning


At a broad level, machine learning can be classified into three types:
1. Supervised learning
2. Unsupervised learning
3. Reinforcement learning
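Supervised learning, the first of these, can be sketched with a toy least-squares fit: the model learns from labelled examples and then predicts on unseen input. The data and target function below are invented for illustration:

```python
import numpy as np

# Labelled training examples generated by y = 2x + 1 (made-up data).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])

# "Training": solve for slope a and intercept b by least squares.
A = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(round(float(a), 2), round(float(b), 2))  # 2.0 1.0

# "Prediction": apply the learned model to new, unseen input.
print(round(float(a) * 10 + float(b), 2))  # 21.0
```

Unsupervised learning would instead find structure in unlabelled data, and reinforcement learning would learn from rewards rather than labelled examples.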

Learning Representations from Data:


Learning representations from data, also known as representation learning, is a concept in machine learning where the model automatically discovers the most useful features or representations of the data for the task at hand. Instead of manually engineering features, the model learns to transform raw data into a form that makes it easier to extract patterns and make predictions.
In deep learning, representation learning is often achieved through the use of multiple layers of nonlinear transformations, which can learn complex representations of the data. These representations can then be used as input to a final task-specific learning algorithm.

Types of Representations:
- Dense representations: low-dimensional vectors that capture the most important information about the data. Word embeddings in NLP are a classic example.
- Sparse representations: higher-dimensional vectors in which most of the components are zero. They are used in cases where certain features are only relevant in specific contexts.
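The dense/sparse contrast can be shown with a toy ten-word vocabulary; the words and embedding values below are made up for the example:

```python
# A sparse one-hot vector for the word "cat" over a toy vocabulary:
# high-dimensional, and all but one component is zero.
vocab = ["the", "a", "cat", "dog", "sat", "on", "mat", "ran", "big", "red"]
one_hot = [1.0 if w == "cat" else 0.0 for w in vocab]

# A dense embedding for the same word: a short vector of learned values
# (the numbers here are invented, not from a trained model).
embedding = [0.21, -1.3, 0.77]

print(sum(v != 0 for v in one_hot), "nonzero of", len(one_hot))  # 1 nonzero of 10
print(len(embedding), "dimensions, all carrying information")
```

A real embedding would be learned during training, whereas the one-hot vector is fixed by the vocabulary.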

Representation learning is fundamental to deep learning and is crucial for enabling models to handle complex,
high-dimensional data effectively.
