Lec1 Introduction

The document discusses the principles of neural networks and machine learning, comparing them to human brain functions such as perception and motor control. It outlines the structure of artificial neurons, activation functions, and learning algorithms, emphasizing the importance of feedback in recurrent networks. Additionally, it describes different network architectures, including single-layer, multilayer, and recurrent networks.

Neural Network and Machine Learning
Sept. 21, 2021

Introduction

Human brain computing is complex, nonlinear, and parallel.

Brain computations include pattern recognition, perception, and motor control.

Can we build an artificial brain?


The neural cell (neuron) is the basic unit of the nervous system. An action potential (AP) propagates along the axon, segment by segment.

Neurons are 5 to 6 orders of magnitude slower than silicon logic gates: events happen in nanoseconds in silicon chips but in milliseconds in neurons.

Biological neuron model:

Dendrite: receives signals.
Cell body: synthesizes incoming signals (nonlinearly).
Axon: transmits the signal.
Synapse: transmits weighted signals to other cells.


Artificial neuron model

Neural Network (NN): a massively parallel distributed processor made up of simple processing units (neurons) that can store experiential knowledge and make it available for use.

Learning Algorithm: the procedure used to perform the learning process, i.e., to modify the synaptic weights so as to attain a desired design objective.

The traditional learning process corresponds to linear adaptive filter theory.

The presence of a bias produces an affine transformation of the weighted sum of the inputs: v = Σ_j w_j x_j + b.
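The neuron model above can be sketched in a few lines of code. This is a minimal illustration, not any particular library's API; the function and variable names are illustrative.

```python
# Sketch of the artificial neuron model: the bias b turns the weighted sum
# of the inputs into an affine transformation v = w.x + b (the induced
# local field), which is then squashed by an activation function.

def neuron_output(weights, inputs, bias, activation):
    # Induced local field: affine transformation of the inputs
    v = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(v)

# Example with a simple step activation:
step = lambda v: 1.0 if v >= 0 else 0.0
y = neuron_output([0.5, -0.3], [1.0, 2.0], 0.2, step)
# v = 0.5*1.0 + (-0.3)*2.0 + 0.2 = 0.1, so the neuron fires (y = 1.0)
```

Note how the same inputs with bias 0 give v = -0.1, so the neuron would not fire: the bias shifts the decision boundary without changing the weights.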
Types of Activation Function

1. Threshold Function. For this type of activation function we have

   φ(v) = 1 if v ≥ 0, and φ(v) = 0 if v < 0.

In engineering, this form of threshold function is commonly referred to as a Heaviside function. In neural computation, such a neuron is referred to as the McCulloch–Pitts model.
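As a small sketch, the Heaviside threshold and a McCulloch–Pitts neuron built on it; the weights and threshold in the AND-gate example are illustrative choices, not values from the lecture.

```python
# Heaviside threshold: output 1 when the induced local field v >= 0, else 0.
def heaviside(v):
    return 1 if v >= 0 else 0

# A McCulloch-Pitts neuron fires iff the weighted input sum reaches a threshold.
def mcculloch_pitts(inputs, weights, threshold):
    v = sum(w * x for w, x in zip(weights, inputs)) - threshold
    return heaviside(v)

# An AND gate: with weights [1, 1] and threshold 1.5 the neuron
# fires only when both inputs are 1.
```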

2. Sigmoid Function. The sigmoid function, whose graph is "S"-shaped, is by far the most common form of activation function used in the construction of neural networks. It is defined as a strictly increasing function that exhibits a graceful balance between linear and nonlinear behavior. An example of the sigmoid function is the logistic function

   φ(v) = 1 / (1 + exp(−a v)),

where a is the slope parameter.
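A minimal sketch of the logistic function just defined; the default slope value is illustrative.

```python
import math

# Logistic function phi(v) = 1 / (1 + exp(-a*v)); a is the slope parameter.
# As a grows large, the curve approaches the threshold (Heaviside) function.
def logistic(v, a=1.0):
    return 1.0 / (1.0 + math.exp(-a * v))

# phi(0) = 0.5 for any slope; large |v| saturates the output toward 0 or 1.
```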
Now, if the output range is [−1, 1] rather than [0, 1]:

The threshold function is now defined as φ(v) = 1 if v > 0, 0 if v = 0, and −1 if v < 0, which is commonly referred to as the signum function.

The sigmoid function is now defined as the hyperbolic tangent, φ(v) = tanh(v), which is of practical importance.
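The bipolar counterparts can be sketched the same way: signum replaces the threshold, and tanh replaces the logistic sigmoid.

```python
import math

# Signum: bipolar ([-1, 1]) version of the threshold function.
def signum(v):
    if v > 0:
        return 1
    if v < 0:
        return -1
    return 0

# Hyperbolic tangent: bipolar version of the sigmoid, bounded in (-1, 1).
def tanh_activation(v):
    return math.tanh(v)
```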

Signal-flow graphs of a NN:

Synaptic link: governed by a linear input–output relation (output = weight × input).
Activation link: governed by a nonlinear input–output relation (the activation function).

• The weighted sum of the inputs defines the induced local field.
• The activation link squashes the induced local field to produce an output.
Feedback

Feedback plays a major role in the study of a special class of neural networks known as recurrent networks.

Single-loop feedback system: the output is related to the input by y(n) = A/(1 − AB) x(n), where A/(1 − AB) is the closed-loop operator of the system and AB is the open-loop operator. In general, AB is noncommutative (AB ≠ BA).
Example 1

Let A be a fixed weight w and B a unit-delay operator z^{-1}, so the closed-loop operator is w/(1 − w z^{-1}). Using the binomial expansion for (1 − w z^{-1})^{-1}:

   y(n) = w Σ_{l=0}^{∞} w^l z^{-l} x(n) = Σ_{l=0}^{∞} w^{l+1} x(n − l)

The dynamic behavior of the feedback system represented by this signal-flow graph is controlled by the weight w:

(a) |w| < 1: stable (the output converges exponentially).
(b) w = 1: linear divergence.
(c) w > 1: exponential divergence.

Network architectures (structures):

1. Single-Layer Feedforward Networks
2. Multilayer Feedforward Networks: by adding one or more hidden layers, the network is enabled to extract higher-order statistics from its input.
3. Recurrent Networks: distinguished by the presence of feedback loops.
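A multilayer feedforward pass can be sketched by chaining the affine-plus-squashing step layer by layer. This is a minimal illustration assuming logistic activations; the weights, shapes, and names are illustrative.

```python
import math

# One fully connected layer: affine transform of the inputs, then
# logistic squashing of each induced local field.
def layer(inputs, weights, biases):
    return [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
            for row, b in zip(weights, biases)]

# Multilayer feedforward pass: each layer's output feeds the next layer.
def forward(x, layers):
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# Two inputs -> hidden layer of 2 neurons -> single output neuron
net = [([[0.5, -0.4], [0.3, 0.8]], [0.1, -0.2]),
       ([[1.0, -1.0]], [0.0])]
y = forward([1.0, 0.5], net)
```

The hidden layer is what lets the network form nonlinear combinations of its inputs, which is the sense in which it can extract higher-order statistics.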