Introduction
What is Intelligence?
The ability to learn, understand, reason, plan, solve problems, and adapt to new situations.
Source: Google Images
What is Artificial Intelligence?
An attempt to bring intelligence into machines through programming, information exchange, and interactions.
Why is intelligence difficult to implement artificially?
“Because it is unexplainable”
Example: Who is he?
What is Learning?
“The ability to retain knowledge which is gained through experience”
What is Machine Learning?
“A type of AI that enables self-learning through data and interactions without human intervention”
Types of Machine Learning
• Supervised (inductive) Learning
– Training data includes desired outputs
• Unsupervised Learning
– Training data does not include desired outputs
• Semi-supervised Learning
– Training data includes a few desired outputs
• Reinforcement Learning
– Rewards from the sequence of actions
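The four types above differ in what the training data contains. As a minimal sketch (the toy data and the 1-nearest-neighbor classifier here are illustrative choices, not from the slides), supervised learning means each training input comes with its desired output:

```python
# Minimal sketch (toy data and classifier are illustrative, not from the slides).
# Supervised learning: the training data includes the desired outputs (labels).
X_train = [[1.0], [1.2], [3.8], [4.0]]   # inputs
y_train = [0, 0, 1, 1]                   # desired outputs

def predict_1nn(x):
    """Return the label of the nearest training point (1-nearest-neighbor)."""
    dists = [abs(x[0] - xt[0]) for xt in X_train]
    return y_train[dists.index(min(dists))]

print(predict_1nn([1.1]))  # 0 -- nearest to the label-0 cluster
print(predict_1nn([3.9]))  # 1 -- nearest to the label-1 cluster

# Unsupervised learning would receive only X_train (no y_train) and have to
# discover the two clusters itself; semi-supervised learning would get labels
# for only a few points; reinforcement learning gets rewards instead of labels.
```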
Fig: Supervised Learning (illustration)
Fig: Unsupervised Learning (illustration)
Fig: Semi-supervised Learning (illustration)
Fig: Reinforcement Learning (illustration)
AI Techniques: NLP
“NLP is a type of artificial intelligence that analyzes text and speech in a way similar to how humans do”
AI Techniques: Computer Vision
“Computer vision is a field of AI that teaches computers to understand and interpret visual information”
AI Techniques: Cognitive Computing
“Cognitive computing is a type of AI that uses machines to simulate human thought processes”
AI Techniques: Deep Learning
“Deep learning is a type of AI that uses artificial neural networks to teach computers how to process data in a way that resembles the human brain”
AI Techniques: Machine Learning
“A type of AI that enables self-learning through data and interactions without human intervention”
Fig: Neural Network as a subfield of Artificial Intelligence
Brief Evolution of Neural Networks:
Neural networks were introduced in the 1940s.
How to train them remained a mystery for about 20 years.
The concept of backpropagation emerged in the 1960s.
Neural networks gained wide attention around 2010.
Neural networks have been used for image captioning, language translation, audio and video synthesis, and more.
In addition, more challenging problems such as self-driving cars, risk calculation, fraud detection, and early cancer detection became feasible.
Biological Neuron: the most basic information-processing unit in the human brain.
Fig: A biological neuron. Source: "Neural Networks" by Simon Haykin
What is Artificial Neuron?
Fig: Comparing a biological neuron to an artificial neuron.
Neural Network
A Neural Network is a parallel distributed system made up of simple processing units, known as neurons, which has a natural tendency to store experiential knowledge and make it available for use.
It resembles the brain in two respects:
1. Knowledge is acquired through a learning process.
2. Inter-neuron connection strengths, known as weights, are used to store the acquired knowledge.
Source: "Neural Networks" by Simon Haykin
Example Neural Networks
Fig: Example basic neural network.
Fig: Visual depiction of passing image data through a neural network to obtain a classification
https://www.youtube.com/watch?v=fXSRfzhHPm0
Artificial Neuron Model
Model of an artificial neuron
Simplest model of an artificial neuron
Fig: Graph of a single-input neuron’s output with a weight of 1, bias of 0 and input x
Fig: Graph of a single-input neuron’s output with a given weight, bias and input x
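The graphs above all come from the same formula: a single-input neuron computes weight times input plus bias. A minimal sketch (the specific values are illustrative):

```python
# Minimal sketch: a single-input artificial neuron computes y = w*x + b.
def neuron(x, w=1.0, b=0.0):
    return w * x + b

print(neuron(2.0))                 # weight 1, bias 0: output equals input -> 2.0
print(neuron(2.0, w=0.5))          # halving the weight halves the slope -> 1.0
print(neuron(2.0, w=0.5, b=1.0))   # the bias shifts the whole line up -> 2.0
```

Changing the weight changes the slope of the output graph; changing the bias shifts the graph up or down.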
Implementing Simple Logic Circuits: AND Gate
Fig: Two Input AND Gate
Implementing Simple Logic Circuits: OR Gate
Fig: Two Input OR Gate
Implementing Simple Logic Circuits: NAND Gate
Fig: Two Input NAND Gate
Implementing Simple Logic Circuits: NOR Gate
Fig: Two Input NOR Gate
Implementing Simple Logic Circuits: XOR Gate
Fig: Two Input XOR Gate
Implementing Simple Logic Circuits: XOR Gate
Fig: Two Input XOR Gate using NAND, OR and AND gates
Implementing Simple Logic Circuits: XNOR Gate
Fig: Two Input XNOR Gate
Implementing Simple Logic Circuits: XNOR Gate
Fig: Two Input XNOR Gate using NOR, AND and OR gates
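The gates above can each be implemented as a single artificial neuron with a step activation; the weights and biases below are illustrative choices, not from the slides. XOR and XNOR are the exceptions: they are not linearly separable, so a single neuron cannot compute them, which is why the slides build them from NAND/OR/AND and NOR/AND/OR combinations.

```python
# Sketch: logic gates as single artificial neurons with a step activation.
# Weights and biases are illustrative choices satisfying each truth table.
def step(z):
    return 1 if z >= 0 else 0

def gate(x1, x2, w1, w2, b):
    return step(w1 * x1 + w2 * x2 + b)

AND  = lambda a, b: gate(a, b,  1,  1, -1.5)
OR   = lambda a, b: gate(a, b,  1,  1, -0.5)
NAND = lambda a, b: gate(a, b, -1, -1,  1.5)
NOR  = lambda a, b: gate(a, b, -1, -1,  0.5)

# XOR needs a small network of neurons, e.g. AND(NAND(a, b), OR(a, b)).
XOR = lambda a, b: AND(NAND(a, b), OR(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), XOR(a, b))
```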
Activation Functions
Model of an artificial neuron
Fig: Typical Neuron Model having Activation Function (inputs x1…xn with weights, summation Σ, bias bk, activation f(·), output yk)
About Activation Functions:
An activation function is applied to the output of a neuron and modifies that output.
We use activation functions to introduce nonlinearity, or a desired mapping, into the model.
Neural networks use activation functions in the hidden layers and in the output layer.
Why Use Activation Functions?
What is a nonlinear function?
A nonlinear function cannot be represented well by a straight line; the sine function is one example:
Fig: Graph of y=sin(x)
The Linear Activation Function
This activation function is usually applied to the last layer’s output in the case of a regression model.
Fig: Linear function graph.
The Step Activation Function:
A neuron “firing” or “not firing”.
This activation function has been used historically in hidden layers, but nowadays it is rarely chosen.
Fig: Step function graph.
The Sigmoid Activation Function
The problem with the step function is that it is not very informative: it is hard to tell how close the function was to activating or deactivating.
The sigmoid function’s output lies in the range 0 to 1, so the returned value retains information about the input. The sigmoid function, historically used in hidden layers, was eventually replaced by the Rectified Linear Unit (ReLU) activation function.
Fig: Sigmoid function graph.
The Rectified Linear Unit (ReLU) Activation Function
The rectified linear unit activation function is simpler than the sigmoid. This simple yet powerful activation function is the most widely used, mainly due to its speed and efficiency.
Fig: Graph of the ReLU activation function
The Leaky Rectified Linear Activation Function
Fig: Graph of the Leaky ReLU activation function
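The activation functions discussed so far can be sketched in a few lines (the Leaky ReLU slope of 0.01 below is an illustrative choice, not from the slides):

```python
import math

# Sketch of the activation functions discussed above.
def linear(z):
    return z

def step(z):
    return 1.0 if z >= 0 else 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    return max(0.0, z)

def leaky_relu(z, alpha=0.01):   # alpha: small illustrative negative slope
    return z if z > 0 else alpha * z

print(step(-0.1), step(0.1))     # 0.0 1.0 -- all-or-nothing, not very informative
print(sigmoid(0.0))              # 0.5 -- graded output in the range (0, 1)
print(relu(-2.0), relu(2.0))     # 0.0 2.0 -- clips negatives, passes positives
print(leaky_relu(-2.0))          # small negative value instead of a hard zero
```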
The SoftMax Activation Function
For a classification problem, the SoftMax activation function is used in the output layer.
In classification, we want to know which class the input represents.
The SoftMax activation function produces confidence scores for each class, and these scores add up to 1.
The predicted class is associated with the output neuron that returns the largest confidence score.
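The description above can be written out directly: exponentiate each score, then divide by the total so the results sum to 1 (subtracting the maximum first is a standard numerical-stability trick, not mentioned on the slides):

```python
import math

# Sketch: SoftMax turns raw output scores into confidence scores summing to 1.
def softmax(scores):
    m = max(scores)                         # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print([round(p, 3) for p in probs])     # largest score gets largest confidence
print(sum(probs))                       # confidences add up to 1
print(probs.index(max(probs)))          # index of the predicted class
```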
ReLU Activation with a single Neuron
Fig: Single neuron with single input (zeroed weight) and ReLU activation function
Fig: Single neuron with single input and ReLU activation function, weight set to 1.0.
Fig: Single neuron with single input and ReLU activation function, bias applied.
ReLU Activation in a Pair of Neurons
Fig: Pair of neurons with single inputs and ReLU activation functions.
Fig: Pair of neurons with single inputs and ReLU activation functions, other bias applied.
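The figures above can be reproduced numerically; a minimal sketch (the specific weights and biases are illustrative):

```python
# Sketch of the single-neuron ReLU examples: the weight scales the slope and
# the bias shifts the point at which the neuron starts "firing".
def relu(z):
    return max(0.0, z)

def neuron(x, w, b):
    return relu(w * x + b)

print(neuron(1.0, 0.0, 0.0))    # zeroed weight: the output is always 0.0
print(neuron(1.0, 1.0, 0.0))    # weight 1: positive inputs pass through -> 1.0
print(neuron(-0.5, 1.0, 1.0))   # bias shifts the activation point -> 0.5

# A pair of neurons in series: the second neuron's negative weight and bias
# can flip and re-clip the first one's output, bounding the active region.
def pair(x, w1=1.0, b1=0.5, w2=-1.0, b2=1.0):
    return relu(w2 * neuron(x, w1, b1) + b2)

print(pair(0.0))   # relu(-1 * relu(0.5) + 1) -> 0.5
```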
Why Use Activation Functions?
Fig: To fit the sin(x) function
Fig: The simple mapping process to fit the sin(x) function
Using the ReLU Activation Function.
ReLU Activation in the Hidden Layers
Fig: The simple mapping process to fit the sin(x) function
Fig : Example of fitment after fully-connecting the neurons and using an optimizer.
Fig : Fitment with 2 hidden layers of 64 neurons each, fully connected, with optimizer.
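The architecture in that figure can be sketched as a forward pass: one input, two fully-connected hidden layers of 64 ReLU neurons each, and one linear output neuron for regression. The random initialization below is an illustrative stand-in; the slides' fitment comes from an optimizer adjusting these weights until the output curve matches sin(x).

```python
import random

# Sketch (untrained): forward pass of a 1 -> 64 -> 64 -> 1 network with ReLU
# hidden layers and a linear output, the architecture used above to fit sin(x).
random.seed(0)

def dense(inputs, weights, biases, activation=None):
    """One fully-connected layer: weighted sum + bias per neuron, optional ReLU."""
    out = []
    for w_row, b in zip(weights, biases):
        z = sum(w * i for w, i in zip(w_row, inputs)) + b
        out.append(max(0.0, z) if activation == "relu" else z)
    return out

def init(n_in, n_out):
    """Random weights in [-1, 1], zero biases (illustrative initialization)."""
    W = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    return W, [0.0] * n_out

W1, b1 = init(1, 64)
W2, b2 = init(64, 64)
W3, b3 = init(64, 1)

def forward(x):
    h1 = dense([x], W1, b1, "relu")     # first hidden layer, 64 ReLU neurons
    h2 = dense(h1, W2, b2, "relu")      # second hidden layer, 64 ReLU neurons
    return dense(h2, W3, b3)[0]         # linear output layer (regression)

print(forward(1.0))  # an arbitrary value; training would pull it toward sin(1.0)
```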
Think on….
Explore the applicability of neural networks to each of the following characteristics and describe it briefly. Also write in which field/application you would want to use neural networks.
1. Nonlinearity
2. Input-Output mapping
3. Adaptivity
4. Generalization
5. Evidential response
6. Fault tolerance
7. Implementability