Introduction To Soft Computing

Course: PE-IV Principles of Soft Computing (ET424)
Pre-requisite Course Codes, if any: Mathematics
Course Objective: To implement soft computing-based solutions for solving real-world problems.
Course Outcomes (CO): At the end of the course, students will be able to:
ET424.1 Identify soft computing techniques and their roles in building intelligent machines.
ET424.2 Apply fuzzy logic reasoning to design models that solve various engineering problems.
ET424.3 Analyze optimization issues using Genetic Algorithms.
ET424.4 Design various hybrid soft computing models by using different techniques.
Syllabus
(Theory
Component)
Textbooks
1. Introduction to Artificial Neural Systems. Jacek M. Zurada. PWS Publishing Company, 1995.
2. Principles of Soft Computing. S. N. Sivanandam and S. N. Deepa. Wiley, 3rd edition, 2018.
3. Neural Networks, Fuzzy Logic and Genetic Algorithms. S. Rajasekaran and G. A. Vijayalakshmi Pai. Prentice-Hall of India, 2004.
Reference Books
1. Neural Networks: A Comprehensive Foundation. Simon Haykin. Macmillan College Publishing Company, 1994.
2. Neural Network Design. Martin Hagan. CENGAGE Learning, India Edition, 2008.
3. Fuzzy Sets and Fuzzy Logic: Theory and Applications. George J. Klir and Bo Yuan. Prentice-Hall of India, 1994.
Assessment
Examination Scheme (%)

Component     ISE    MSE    ESE
Theory        13.5   13.5   40
Laboratory    20     --     13
• ISE 1: Quiz1 (20 Marks) September 2nd Week

• ISE 2: Quiz2 (20 Marks) October 4th Week

• ISE 3: Coding Assignment (Group 2/3 activity) (40 Marks) November 1st Week
Module I

Introduction To Soft Computing


Computing

(Diagram: computing maps an antecedent (input x) to a consequent (output y) via y = f(x); the output serves as the control action.)
Hard Computing Vs Soft Computing
• In 1996, L. A. Zadeh (LAZ) introduced the term hard computing. According to LAZ, we term
a computing as "hard" computing if:
► a precise result is guaranteed;
► the control action is unambiguous;
► the control action is formally defined (i.e., with a mathematical model).
• According to Prof. Zadeh:
"...in contrast to traditional hard computing, soft computing exploits the tolerance for
imprecision, uncertainty, and partial truth to achieve tractability, robustness, low solution-
cost, and better rapport with reality”

Soft Computing Main Components:


• Approximate Reasoning
• Search & Optimization
 Neural Networks, Fuzzy Logic, Evolutionary Algorithms
Soft vs. Hard Computing

1. Hard Computing: Hard computing uses traditional mathematical


methods to solve problems, such as algorithms and mathematical
models. It is based on deterministic and precise calculations and is ideal
for solving problems that have well-defined mathematical solutions.

2. Soft Computing: Soft computing, on the other hand, uses techniques


such as fuzzy logic, neural networks, genetic algorithms, and other
heuristic methods to solve problems. It is based on the idea of
approximation and is ideal for solving problems that are difficult or
impossible to solve exactly.
PROBLEM SOLVING TECHNIQUES: SOFT COMPUTING vs HARD COMPUTING

• Soft computing is tolerant of imprecision, uncertainty, partial truth, and approximation; hard computing needs an exactly stated analytic model.

• Soft computing relies on fuzzy logic and probabilistic reasoning; hard computing relies on binary logic and crisp systems.

• Soft computing has the features of approximation and dispositionality; hard computing has the features of exactitude (precision) and categoricity.

• Soft computing is stochastic in nature; hard computing is deterministic in nature.

• Soft computing works on ambiguous and noisy data; hard computing works on exact data.

• Soft computing can perform parallel computations; hard computing performs sequential computations.

• Soft computing produces approximate results; hard computing produces precise results.

• Soft computing can evolve its own programs; hard computing requires programs to be written.

• Soft computing incorporates randomness; hard computing is settled (fully specified in advance).

• Soft computing uses multivalued logic; hard computing uses two-valued logic.
Conclusion: Soft Computing…
Soft computing tolerates the following:
• Imprecision
• Uncertainty
• Partial truth
• Approximation
• The role model of soft computing is the human mind.
CONSTITUENTS OF Soft Computing

• Fuzzy Systems: reasoning and imprecision

• Neural Networks: learning

• Evolutionary Computing (Genetic Algorithms): searching and optimization
Biological NEURAL NETWORKS
BRAIN COMPUTATION
ARTIFICIAL NEURAL NET
• Information-processing system.
• Neurons process the information.
• The signals are transmitted by means of connection links.
• The links possess an associated weight.
• The output signal is obtained by applying an activation function to
the net input.
Motivation
• Massive parallelism
• Distributed representation and computation
• Learning ability
• Generalization ability
• Adaptivity
• Inherent contextual information processing
• Fault tolerance
• Low energy consumption.
ARTIFICIAL NEURAL NET
PROCESSING OF AN ARTIFICIAL NET
• The neuron is the basic information processing unit of a NN. It
consists of:
1. A set of links, describing the neuron inputs, with weights W1, W2,…, Wm.

2. An adder function (linear combiner) for computing the weighted sum of the inputs (real
numbers): u = ∑ wi·xi

3. Activation function for limiting the amplitude of the neuron output.

y =(u + b)
BIAS OF AN ARTIFICIAL NEURON

• The bias value is added to the weighted sum
∑wixi so that the decision boundary can be shifted away from the origin.
Yin = ∑wixi + b, where b is the bias.

(Figure: the decision lines x1 - x2 = -1 and x1 - x2 = 1, showing how the bias shifts the boundary away from the origin.)
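As a concrete illustration, here is a minimal Python sketch of this computation; the specific weights, inputs, bias, and the choice of a binary step activation are illustrative assumptions, not values from the slides.

```python
# Minimal sketch of one artificial neuron: adder (weighted sum), bias,
# and an activation function that limits the output amplitude.

def step(u, threshold=0.0):
    """Hard-limit (binary) activation: 1 if net input reaches the threshold, else 0."""
    return 1 if u >= threshold else 0

def neuron_output(x, w, b):
    """y = f(u + b), where u = sum(w_i * x_i) is the linear combiner output."""
    u = sum(wi * xi for wi, xi in zip(w, x))  # adder function (weighted sum)
    y_in = u + b                              # net input: Yin = sum(w_i * x_i) + b
    return step(y_in)                         # activation limits the amplitude

# Illustrative example with two inputs.
print(neuron_output(x=[1, 0], w=[0.6, -0.4], b=-0.5))  # 0.6 - 0.5 = 0.1 >= 0, so 1
```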
OPERATION OF A NEURAL NET
Learning by trial‐and‐error

Continuous process of:


➢Trial:
Process an input to produce an output (in ANN terms: compute the output for a given input).
➢Evaluate:
Evaluate this output by comparing the actual output with the expected output.
➢Adjust:
Adjust the weights.
How does it work? (sketched in code below)
 Set initial values of the weights randomly.
 Input: truth table of the XOR
 Do
▪ Read input (e.g. 0, and 0)
▪ Compute an output (e.g. 0.60543)
▪ Compare it to the expected output. (Diff= 0.60543)
▪ Modify the weights accordingly.
 Loop until a condition is met
▪ Condition: certain number of iterations
▪ Condition: error threshold
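A hedged sketch of this loop in Python follows. Because a single neuron cannot represent XOR, the sketch assumes a small 2-2-1 network trained by gradient descent (backpropagation); the learning rate, random seed, epoch limit, and error threshold are illustrative choices, and convergence may take more iterations for other seeds.

```python
import math
import random

random.seed(0)

def sigmoid(u):
    """Binary sigmoid activation, range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-u))

# Input: truth table of XOR.
samples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# Set initial weights to small random values in [-1, 1].
# The last weight in each row acts as the bias (its input is fixed at 1).
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(3)]
lr = 0.5  # illustrative learning rate

for epoch in range(20000):            # condition: certain number of iterations
    total_error = 0.0
    for x, target in samples:         # read input (e.g. 0 and 0)
        # Trial: compute an output for this input (forward pass).
        h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
        y = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
        # Evaluate: compare the actual output to the expected output.
        err = target - y
        total_error += err * err
        # Adjust: modify the weights accordingly (backpropagation).
        delta_y = err * y * (1 - y)
        for j in range(2):
            delta_h = delta_y * w_out[j] * h[j] * (1 - h[j])
            for k in range(2):
                w_hidden[j][k] += lr * delta_h * x[k]
            w_hidden[j][2] += lr * delta_h        # bias input is 1
        for j in range(2):
            w_out[j] += lr * delta_y * h[j]
        w_out[2] += lr * delta_y
    if total_error < 0.001:           # condition: error threshold
        break

for x, target in samples:
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    y = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
    print(x, target, round(y, 3))
```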
Design Issues
 Initial weights (small random values ∈[‐1,1])
 Transfer function (how the inputs and the weights are
combined to produce the output)
 Error estimation
 Weight adjustment
 Number of neurons
 Data representation
 Size of training set
Learning Methods
• Supervised Learning - Providing the network with a series of sample inputs
and comparing the output with the expected responses.
• Unsupervised Learning - Similar input vectors are assigned to the same
output unit; no expected responses are provided.
• Reinforcement Learning - The right answer is not provided, but an indication of
whether the output is 'right' or 'wrong' is.
• Online Learning -The adjustment of the weight and threshold is made after
presenting each training sample to the network.
• Offline Learning - The adjustment of the weight vector and threshold is
made only after all the training set is presented to the network. It is also
called Batch Learning.
Supervised learning

 This is what we have seen so far!

 A network is fed with a set of training samples (inputs and


corresponding output), and it uses these samples to learn
the general relationship between the inputs and the outputs.

 This relationship is represented by the values of the weights


of the trained network.
• Supervised Learning:

(Block diagram: inputs from the environment feed both the actual system and the neural network; the actual system supplies the expected output, the network supplies the actual output, and a summing junction forms the error = expected output minus actual output, which drives the weight adjustment.)
Unsupervised learning

 No desired output is associated with the training data!

 Faster than supervised learning

 Used to find structure within data (a sketch of clustering follows this list):
 Clustering
 Compression
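A minimal sketch of the clustering idea, using winner-take-all competitive learning: the output unit whose weight vector is most similar to the input wins and moves toward it. The data points, prototype count, learning rate, and seed are illustrative assumptions.

```python
import random

random.seed(1)

# Four 2-D points forming two visible clusters (illustrative data).
data = [(0.1, 0.2), (0.15, 0.1), (0.9, 0.8), (0.85, 0.95)]
# Two output units, each with a randomly initialized weight (prototype) vector.
prototypes = [[random.random(), random.random()] for _ in range(2)]
lr = 0.3  # illustrative learning rate

for _ in range(50):
    for x in data:
        # Assign the input to the most similar output unit (smallest distance).
        dists = [sum((wi - xi) ** 2 for wi, xi in zip(w, x)) for w in prototypes]
        winner = dists.index(min(dists))
        # Move only the winning unit's weights toward the input;
        # no desired output is used anywhere.
        prototypes[winner] = [wi + lr * (xi - wi)
                              for wi, xi in zip(prototypes[winner], x)]

print(prototypes)  # each prototype should settle near one cluster
```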
Reinforcement learning

 Like supervised learning, but:

 Weight adjustment is not directly related to the error value.

 The error value is used to randomly shuffle the weights.

 Relatively slow learning due to ‘randomness’.


WEIGHT AND BIAS UPDATION
• Per Sample Updating: update weights and biases after the presentation of each sample.

• Per Training Set Updating (Epoch or Iteration): accumulate the weight and bias increments in variables, and update the weights and biases only after all the samples of the training set have been presented.

(Both schemes are sketched below.)
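In code, the two schemes differ only in when the increments are applied. In this sketch, `compute_deltas` is a hypothetical helper standing in for whatever learning rule supplies the weight and bias increments; it is not from the slides.

```python
# Sketch contrasting the two update schemes. `compute_deltas(x, target, w, b)`
# is a hypothetical placeholder returning (dw_list, db) for one sample.

def per_sample_update(samples, weights, bias, lr, compute_deltas):
    """Per Sample Updating: apply the increments right after each sample."""
    for x, target in samples:
        dw, db = compute_deltas(x, target, weights, bias)
        weights = [w + lr * d for w, d in zip(weights, dw)]
        bias = bias + lr * db
    return weights, bias

def per_epoch_update(samples, weights, bias, lr, compute_deltas):
    """Per Training Set Updating: accumulate increments, apply once per epoch."""
    acc_dw = [0.0] * len(weights)
    acc_db = 0.0
    for x, target in samples:
        dw, db = compute_deltas(x, target, weights, bias)
        acc_dw = [a + d for a, d in zip(acc_dw, dw)]
        acc_db += db
    weights = [w + lr * d for w, d in zip(weights, acc_dw)]
    bias = bias + lr * acc_db
    return weights, bias
```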
STOPPING CONDITION
• All changes in weights (Δwij) in the previous epoch are below some
threshold, or

• The percentage of samples misclassified in the previous epoch is


below some threshold, or

• A pre-specified number of epochs has expired.

• In practice, several hundreds of thousands of epochs may be required


before the weights will converge.
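The three conditions combine naturally into one check that a training loop can call once per epoch; this is a small sketch, and the threshold values are illustrative rather than prescribed by the slides.

```python
# Sketch of the three stopping conditions (threshold values are illustrative).

def should_stop(weight_changes, misclassified_fraction, epoch,
                dw_threshold=1e-6, error_threshold=0.01, max_epochs=500_000):
    if all(abs(dw) < dw_threshold for dw in weight_changes):
        return True   # all weight changes in the previous epoch below threshold
    if misclassified_fraction < error_threshold:
        return True   # misclassification percentage below threshold
    if epoch >= max_epochs:
        return True   # pre-specified number of epochs has expired
    return False
```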
Types of ANN
ACTIVATION FUNCTION

• ACTIVATION LEVEL – DISCRETE OR CONTINUOUS

• HARD LIMIT FUNCTION (DISCRETE)


• Binary Activation function
• Bipolar activation function
• Identity function

• SIGMOIDAL ACTIVATION FUNCTION (CONTINUOUS)


• Binary Sigmoidal activation function
• Bipolar Sigmoidal activation function
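The listed functions can be written down directly. The sketch below uses an illustrative steepness parameter `lam` for the sigmoids, since the slides do not fix a notation.

```python
import math

def binary_step(u, theta=0.0):
    """Hard limit, binary: output in {0, 1}."""
    return 1 if u >= theta else 0

def bipolar_step(u, theta=0.0):
    """Hard limit, bipolar: output in {-1, 1}."""
    return 1 if u >= theta else -1

def identity(u):
    """Identity function: output equals the net input."""
    return u

def binary_sigmoid(u, lam=1.0):
    """Continuous, range (0, 1): f(u) = 1 / (1 + e^(-lam*u))."""
    return 1.0 / (1.0 + math.exp(-lam * u))

def bipolar_sigmoid(u, lam=1.0):
    """Continuous, range (-1, 1): f(u) = (1 - e^(-lam*u)) / (1 + e^(-lam*u))."""
    return (1.0 - math.exp(-lam * u)) / (1.0 + math.exp(-lam * u))
```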
CONSTRUCTING ANN
• Determine the network properties:
• Network topology
• Types of connectivity
• Order of connections
• Weight range
• Determine the node properties:
• Activation range
• Determine the system dynamics
• Weight initialization scheme
• Activation – calculating formula
• Learning rule
PROBLEM SOLVING
• Select a suitable NN model based on the nature of the problem.

• Construct a NN according to the characteristics of the application


domain.

• Train the neural network with the learning procedure of the


selected model.

• Use the trained network to make inferences or solve problems.


SALIENT FEATURES OF ANN
• Adaptive learning
• Self-organization
• Real-time operation
• Fault tolerance via redundant information coding
• Massive parallelism
• Learning and generalizing ability
• Distributed representation
Applications Areas
 Function approximation
 including time series prediction and modelling.
 Classification
 including pattern and sequence recognition, novelty detection and sequential decision making.
 (radar systems, face identification, handwritten text recognition)

 Data processing
 including filtering, clustering, blind source separation and compression.
 (data mining, e-mail Spam filtering)
