
Introduction to Soft Computing

By Dr. Diya Vadhwani


Content
• Introduction
• What is Soft Computing?
• Soft Computing Tools
• Importance
• Future of Soft Computing
• Hard Computing Vs Soft Computing
• Conclusion
INTRODUCTION

• Increasingly complex systems arise in biology, medicine, the humanities, and the management sciences.
• Such fields have often remained intractable to conventional mathematical and analytical methods.
• Soft computing deals with imprecision, uncertainty, partial truth, and approximation to achieve tractability, robustness, and low solution cost.
What is Soft Computing?

• It consists of distinct concepts and techniques which aim to overcome the difficulties encountered in real-world problems.

• These problems result from the fact that our world seems to
be imprecise, uncertain and difficult to categorize.
Soft Computing
• Soft computing is the reverse of hard (conventional) computing. It refers to a group of computational techniques that are based on artificial intelligence (AI) and natural selection.
• It provides cost-effective solutions to complex real-life problems for which no hard computing solution exists.
• Zadeh coined the term soft computing in 1992. The objective of soft computing is to provide precise approximation and quick solutions for complex real-life problems.
Soft Computing
• In simple terms, soft computing can be understood as an emerging approach that aims to give machines the remarkable ability of the human mind.
• It models the human mind; the human mind is the role model for soft computing.
• Note: soft computing is different from traditional/conventional computing in that it deals with approximate models.
The tools for soft computing
• Fuzzy logic models
• Neural networks
• Genetic algorithms
• Machine Learning
• Probabilistic Reasoning
Soft Computing
• Soft computing provides an approach to problem-solving using means other than exact, conventional computation.
• With the human mind as a role model, soft
computing is tolerant of partial truths,
uncertainty, imprecision and approximation,
unlike traditional computing models.
• The tolerance of soft computing allows
researchers to approach some problems that
traditional computing can't process.
Soft Computing
• Fuzzy logic
• Machine learning
• Probabilistic reasoning
• Evolutionary computation
• Perceptron
• Genetic algorithms
• Differential evolution
• Support vector machines
Soft Computing
• Metaheuristics
• Swarm intelligence
• Ant colony optimization
• Particle swarm optimization
• Bayesian networks
• Artificial neural networks
• Expert systems
Fuzzy Logic
• Fuzzy logic is a form of mathematical logic that tries to solve problems using an open and imprecise spectrum of data.
• It makes it possible to arrive at an array of definite conclusions from vague or imprecise inputs.
• Fuzzy logic is designed to achieve the best possible solution to complex problems from all the available information and input data.
• Fuzzy logic systems are therefore regarded as very effective solution finders (a minimal sketch follows below).
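A minimal sketch of the idea in plain Python (not from the original slides): triangular membership functions for temperature and a simple weighted-average defuzzification step. The temperature ranges, fan speeds and the implicit rules are illustrative assumptions.

# Minimal fuzzy-logic sketch: triangular membership functions and a
# weighted-average defuzzification. All ranges and values are assumptions.

def triangular(x, a, b, c):
    """Degree of membership of x in a triangular fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fan_speed(temperature):
    """Tiny inference: cooler temperatures imply slower assumed fan speeds."""
    cold = triangular(temperature, 0, 10, 20)    # fuzzy set "cold"
    warm = triangular(temperature, 15, 25, 35)   # fuzzy set "warm"
    hot  = triangular(temperature, 30, 40, 50)   # fuzzy set "hot"
    speeds = {"slow": 400, "medium": 900, "fast": 1500}   # assumed rpm values
    weights = {"slow": cold, "medium": warm, "fast": hot}
    total = sum(weights.values())
    if total == 0:
        return speeds["medium"]                  # fallback outside all sets
    # Weighted-average defuzzification of the fired fuzzy sets.
    return sum(weights[k] * speeds[k] for k in speeds) / total

print(fan_speed(33))   # a temperature that is partly "warm", partly "hot"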
Machine Learning
• Machine learning is an application of artificial intelligence (AI)
that provides systems the ability to automatically learn and
improve from experience without being explicitly programmed.
• Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.
• The process of learning begins with observations or data, such
as examples, direct experience, or instruction, in order to look
for patterns in data and make better decisions in the future
based on the examples that we provide.
• The primary aim is to allow computers to learn automatically, without human intervention or assistance, and to adjust their actions accordingly (a minimal example follows below).
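As a small illustration of "learning from examples without being explicitly programmed", here is a 1-nearest-neighbour classifier in plain Python; the tiny fruit dataset and its features are illustrative assumptions.

# A minimal "learning from examples" sketch: a 1-nearest-neighbour classifier.
import math

# Each example: (weight in grams, surface smoothness 0-10) -> label (assumed data)
training_data = [
    ((150, 9), "apple"),
    ((170, 8), "apple"),
    ((130, 2), "orange"),
    ((140, 3), "orange"),
]

def predict(features):
    """Classify by copying the label of the closest training example."""
    nearest = min(training_data,
                  key=lambda example: math.dist(example[0], features))
    return nearest[1]

# The program was never told a rule such as "smooth and heavy means apple";
# it infers the decision from the recorded observations.
print(predict((160, 7)))   # -> "apple"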
Origins of Machine Learning
• The earliest databases recorded information from the
observable environment.
• Astronomers recorded patterns of planets and stars;
biologists noted results from experiments
crossbreeding plants and animals; and cities recorded
tax payments, disease outbreaks, and populations.
• Each of these required a human being to first observe
and second, record the observation.
• Today, such observations are increasingly automated
and recorded systematically in ever-growing
computerized databases.
Machine Learning
• The field of study interested in the development of
computer algorithms for transforming data into
intelligent action is known as machine learning.
Probabilistic Reasoning
• Probabilistic reasoning is a way of knowledge
representation where we apply the concept of
probability to indicate the uncertainty in
knowledge.
• In probabilistic reasoning, we combine probability
theory with logic to handle the uncertainty.
• We use probability in probabilistic reasoning because it provides a way to handle the uncertainty that results from laziness (it is too much work to list every exception) and from ignorance (we do not know all the relevant facts).
Probabilistic Reasoning
• In the real world there are many scenarios where the certainty of something is not confirmed, such as "it will rain today," "how someone will behave in a given situation," or "the outcome of a match between two teams or two players." These are probable statements: we can assume they may happen but cannot be sure, so here we use probabilistic reasoning (a small Bayes' rule example follows this list).
• Need for probabilistic reasoning in AI:
– When there are unpredictable outcomes.
– When the specifications or possible predicates become too large to handle.
– When an unknown error occurs during an experiment.
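A small worked example of probabilistic reasoning using Bayes' rule in Python; the prior and likelihood values for the rain/clouds scenario are assumed for illustration only.

# Bayes' rule on the "it will rain today" example; all numbers are assumptions.

p_rain = 0.3                      # prior: P(rain today)
p_clouds_given_rain = 0.9         # likelihood: P(clouds | rain)
p_clouds_given_no_rain = 0.4      # likelihood: P(clouds | no rain)

# Total probability of observing clouds.
p_clouds = (p_clouds_given_rain * p_rain
            + p_clouds_given_no_rain * (1 - p_rain))

# Posterior belief after observing clouds: P(rain | clouds).
p_rain_given_clouds = p_clouds_given_rain * p_rain / p_clouds

print(round(p_rain_given_clouds, 3))   # ~0.491: the evidence raised our belief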
Evolutionary computation

• Evolutionary computation is a family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing that studies these algorithms.
• In technical terms, they are a family of population-based, trial-and-error problem solvers with a metaheuristic or stochastic optimization character.
Evolutionary computation

• In evolutionary computation, an initial set of candidate
solutions is generated and iteratively updated.

• Each new generation is produced by stochastically
removing less desired solutions, and introducing small
random changes.

• In biological terminology, a population of solutions is
subjected to natural selection (or artificial selection) and
mutation.

• As a result, the population will gradually evolve to increase in fitness, as measured by the chosen fitness function of the algorithm (a minimal loop is sketched below).
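A minimal evolutionary loop in Python, sketching the generate/remove/mutate cycle described above; the fitness function, population size and mutation scale are assumptions.

# Minimal evolutionary computation: keep the better half, mutate to refill.
# It evolves real numbers toward the maximum of f(x) = -(x - 3)^2.
import random

def fitness(x):
    return -(x - 3) ** 2          # higher is better; optimum at x = 3

population = [random.uniform(-10, 10) for _ in range(20)]

for generation in range(100):
    # Remove less desired solutions: keep the better half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Introduce small random changes (mutation) to refill the population.
    children = [x + random.gauss(0, 0.5) for x in survivors]
    population = survivors + children

print(round(max(population, key=fitness), 2))   # close to 3.0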
Human Nervous System
• The human nervous system consists of billions of neurons. These neurons collectively process the input received from the sensory organs and decide how to react to it.
• A typical neuron in the human nervous system has three
main parts: dendrites, nucleus, and axons.
– The information passed to a neuron is received by
dendrites.
– The nucleus is responsible for processing this information.
– The output of a neuron is passed to other neurons via the
axon, which is connected to the dendrites of other
neurons further down the network.
Perceptron

• Artificial neural networks are inspired by the architecture of the human neural network. The simplest neural network consists of only one neuron and is called a perceptron.
How Perceptron works?
• A perceptron has one input layer and one neuron. Input layer acts as
the dendrites and is responsible for receiving the inputs.
• The number of nodes in the input layer is equal to the number of
features in the input dataset. Each input is multiplied with a weight
(which is typically initialized with some random value) and the
results are added together.
• The sum is then passed through an activation function. The activation function of a perceptron plays the role of the nucleus of a neuron in the human nervous system.
• It processes the information and yields an output. In the case of a single perceptron, this output is the final outcome. However, in the case of multilayer perceptrons, the output of the neurons in one layer serves as the input to the neurons of the next layer (a minimal sketch follows below).
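A minimal perceptron sketch in Python: weighted inputs, a step activation, and the classic perceptron learning rule. The AND dataset, the learning rate and the zero initialisation (the slides mention random initialisation) are illustrative choices.

# Minimal perceptron: weighted sum of inputs passed through a step activation,
# trained with the perceptron learning rule on the (linearly separable) AND data.

def perceptron_output(inputs, weights, bias):
    # Each input is multiplied by its weight and the results are added together.
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Step activation: plays the role of the neuron's "nucleus".
    return 1 if weighted_sum > 0 else 0

# Training data for logical AND.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]          # zero init here for a deterministic sketch
bias = 0.0
learning_rate = 0.1

# Perceptron learning rule: nudge weights toward the correct outputs.
for epoch in range(100):
    for inputs, target in samples:
        error = target - perceptron_output(inputs, weights, bias)
        weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
        bias += learning_rate * error

print([perceptron_output(x, weights, bias) for x, _ in samples])  # -> [0, 0, 0, 1]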
Artificial Neural Network
• A multilayer perceptron is commonly known as an artificial neural network.
• A single-layer perceptron can solve simple problems where the data is linearly separable in 'n' dimensions, where 'n' is the number of features in the dataset. However, in the case of non-linearly separable data, the accuracy of a single-layer perceptron decreases significantly.
• Multilayer perceptrons, on the other hand, can work efficiently with non-linearly separable data.
• An artificial neural network is a combination of multiple neurons connected in the form of a network. It has an input layer, one or more hidden layers, and an output layer (see the forward-pass sketch below).
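A minimal forward pass through a multilayer perceptron in Python. The hand-set weights are an illustrative assumption chosen so that the hidden layer solves XOR, a problem that is not linearly separable and therefore cannot be solved by a single perceptron.

# Minimal multilayer-perceptron forward pass with hand-set (assumed) weights.

def step(x):
    return 1 if x > 0 else 0

def forward(x1, x2):
    # Hidden layer: two neurons (roughly an OR detector and an AND detector).
    h1 = step(1.0 * x1 + 1.0 * x2 - 0.5)
    h2 = step(1.0 * x1 + 1.0 * x2 - 1.5)
    # Output layer: fires when OR is true but AND is not, i.e. XOR.
    return step(1.0 * h1 - 1.0 * h2 - 0.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", forward(a, b))   # 0, 1, 1, 0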
Genetic Algorithm

• The genetic algorithm is a method for solving both constrained and unconstrained optimization problems that is based on natural selection, the process that drives biological evolution.
• The genetic algorithm repeatedly modifies a population of individual solutions.
• At each step, the genetic algorithm selects individuals from the current population to be parents and uses them to produce the children for the next generation.
Genetic Algorithm
• Over successive generations, the population
"evolves" toward an optimal solution.
• You can apply the genetic algorithm to solve a
variety of optimization problems that are not well
suited for standard optimization algorithms,
including problems in which the objective function
is discontinuous, nondifferentiable, stochastic, or
highly nonlinear.
• The genetic algorithm can address problems of
mixed integer programming, where some
components are restricted to be integer-valued.
Genetic Algorithm

• The genetic algorithm uses three main types of rules at
each step to create the next generation from the current
population:

• Selection rules select the individuals, called parents, that
contribute to the population at the next generation. The
selection is generally stochastic, and can depend on the
individuals' scores.

• Crossover rules combine two parents to form children for
the next generation.

• Mutation rules apply random changes to individual parents to form children (a minimal GA using all three rules is sketched below).
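A minimal genetic algorithm in Python using the three rules above (selection, crossover, mutation) to maximise the number of 1-bits in a bitstring; the string length, population size and rates are assumptions.

# Minimal GA for the "one-max" problem: maximise the number of 1-bits.
import random

LENGTH, POP_SIZE, GENERATIONS = 20, 30, 60

def fitness(individual):
    return sum(individual)                      # count of 1-bits

def select(population):
    # Selection rule: tournament of two; the fitter individual becomes a parent.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(parent1, parent2):
    # Crossover rule: combine two parents at a random cut point.
    cut = random.randint(1, LENGTH - 1)
    return parent1[:cut] + parent2[cut:]

def mutate(child, rate=0.02):
    # Mutation rule: flip each bit with a small probability.
    return [1 - bit if random.random() < rate else bit for bit in child]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

print(max(fitness(ind) for ind in population))  # usually 20 (all ones) or close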
Differential Evolution
• In evolutionary computation, differential evolution
(DE) is a method that optimizes a problem by
iteratively trying to improve a candidate solution
with regard to a given measure of quality.
• Such methods are commonly known as
metaheuristics as they make few or no assumptions
about the problem being optimized and can search
very large spaces of candidate solutions.
• However, metaheuristics such as DE do not
guarantee an optimal solution is ever found.
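A minimal sketch of the classic DE/rand/1/bin scheme in Python, minimising a simple sphere function; the population size and the F and CR parameters are standard-looking but assumed values.

# Minimal differential evolution (DE/rand/1/bin style) on the sphere function.
import random

def sphere(x):
    return sum(v * v for v in x)                 # minimum 0 at the origin

DIM, NP, F, CR, GENERATIONS = 5, 20, 0.8, 0.9, 200
population = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(NP)]

for _ in range(GENERATIONS):
    for i in range(NP):
        # Pick three distinct vectors different from the current one.
        a, b, c = random.sample([p for j, p in enumerate(population) if j != i], 3)
        # Mutation: perturb a with the scaled difference of b and c.
        mutant = [a[d] + F * (b[d] - c[d]) for d in range(DIM)]
        # Binomial crossover between the current vector and the mutant.
        trial = [mutant[d] if random.random() < CR else population[i][d]
                 for d in range(DIM)]
        # Greedy selection: keep the trial only if it is at least as good.
        if sphere(trial) <= sphere(population[i]):
            population[i] = trial

print(round(min(sphere(p) for p in population), 6))   # close to 0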
What is support vector?
• A "Support Vector Machine" (SVM) is a supervised machine learning algorithm which can be used for both classification and regression challenges. However, it is mostly used for classification problems.
• In this algorithm, we plot each data item as a point in n-dimensional space (where n is the number of features you have) with the value of each feature being the value of a particular coordinate.
• Then, we perform classification by finding the hyperplane that best differentiates the two classes.
Support Vector Machine
• Generally, the Support Vector Machine is considered a classification approach, but it can be employed in both classification and regression problems.
• It can easily handle multiple continuous and categorical variables.
• SVM constructs a hyperplane in multidimensional space to separate the different classes. SVM generates the optimal hyperplane in an iterative manner, minimizing the classification error.
• The core idea of SVM is to find a maximum marginal hyperplane (MMH) that best divides the dataset into classes (a short example follows below).
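A short example using scikit-learn's SVC (assuming scikit-learn is installed); the toy 2-D points and labels are illustrative.

# Fitting a linear SVM on a toy 2-D dataset with scikit-learn.
from sklearn.svm import SVC

# Two small clusters of 2-D points and their class labels (assumed data).
X = [[1, 1], [1, 2], [2, 1], [6, 5], [7, 7], [6, 6]]
y = [0, 0, 0, 1, 1, 1]

# A linear kernel looks for the maximum-margin separating hyperplane.
clf = SVC(kernel="linear")
clf.fit(X, y)

print(clf.predict([[2, 2], [7, 6]]))   # -> [0 1]
print(clf.support_vectors_)            # the points that define the margin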
Support Vectors
Swarm Intelligence
• Swarm intelligence (SI) is the collective behavior
of decentralized, self-organized systems,
natural or artificial.
• The concept is employed in work on artificial intelligence. The expression was introduced by Gerardo Beni and Jing Wang in 1989, in the context of cellular robotic systems.
Swarm Intelligence
• SI systems consist typically of a population of simple agents or
boids interacting locally with one another and with their
environment.
• The inspiration often comes from nature, especially biological
systems.
• The agents follow very simple rules, and although there is no
centralized control structure dictating how individual agents
should behave, local, and to a certain degree random, interactions
between such agents lead to the emergence of "intelligent" global
behavior, unknown to the individual agents.
• Examples of swarm intelligence in natural systems include ant colonies, bee colonies, bird flocking, hawks hunting, animal herding, bacterial growth, fish schooling and microbial intelligence.
Ant Colony Optimization
• Ant colony optimization algorithm (ACO) is a
probabilistic technique for solving computational
problems which can be reduced to finding good paths
through graphs.
• Artificial ants represent multi-agent methods inspired by the behavior of real ants. The pheromone-based communication of biological ants is often the predominant paradigm used.
• Combinations of artificial ants and local search
algorithms have become a method of choice for
numerous optimization tasks involving some sort of
graph, e.g., vehicle routing and internet routing.
Ant Colony Optimization
• As an example, ant colony optimization is a class
of optimization algorithms modeled on the
actions of an ant colony.
• Artificial 'ants' (e.g. simulation agents) locate
optimal solutions by moving through a
parameter space representing all possible
solutions.
• Real ants lay down pheromones directing each other to resources while exploring their environment (the pheromone mechanism is sketched below).
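A minimal Python sketch of the pheromone mechanism only: ants repeatedly choose among a few fixed candidate routes, shorter routes receive more pheromone, and old pheromone evaporates. The route lengths, colony size and evaporation rate are assumptions; a full ACO would also construct paths edge by edge and use heuristic information.

# Pheromone-based route choice: positive feedback concentrates ants on the
# shortest of several fixed candidate routes.
import random

route_lengths = [8.0, 5.0, 11.0, 6.5]          # candidate routes to food (assumed)
pheromone = [1.0] * len(route_lengths)
EVAPORATION, N_ANTS, ITERATIONS = 0.5, 20, 50

def choose_route():
    # Probability of taking a route is proportional to its pheromone level.
    total = sum(pheromone)
    r = random.uniform(0, total)
    cumulative = 0.0
    for i, tau in enumerate(pheromone):
        cumulative += tau
        if r <= cumulative:
            return i
    return len(pheromone) - 1

for _ in range(ITERATIONS):
    choices = [choose_route() for _ in range(N_ANTS)]
    # Evaporation: old pheromone fades so the colony can forget bad routes.
    pheromone = [tau * (1 - EVAPORATION) for tau in pheromone]
    # Deposit: each ant lays pheromone inversely proportional to route length.
    for i in choices:
        pheromone[i] += 1.0 / route_lengths[i]

print(pheromone.index(max(pheromone)))          # usually 1, the shortest route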
Ant Colony Optimization

Image: Wikipedia
Particle Swarm Optimization
• Particle swarm optimization (PSO) is a computational method
that optimizes a problem by iteratively trying to improve a
candidate solution with regard to a given measure of quality.
• It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search space according to simple mathematical formulae over each particle's position and velocity.
• Each particle's movement is influenced by its local best known
position, but is also guided toward the best known positions in
the search-space, which are updated as better positions are
found by other particles.
• This is expected to move the swarm toward the best solutions (a minimal sketch follows below).
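A minimal PSO sketch in Python, in one dimension for readability; the inertia and acceleration coefficients are standard-looking but assumed values.

# Minimal particle swarm optimisation minimising f(x) = (x - 3)^2.
import random

def f(x):
    return (x - 3) ** 2

W, C1, C2 = 0.7, 1.5, 1.5            # inertia, cognitive and social weights
N, ITERATIONS = 15, 100

positions = [random.uniform(-10, 10) for _ in range(N)]
velocities = [0.0] * N
personal_best = positions[:]                 # best position seen by each particle
global_best = min(positions, key=f)          # best position seen by the swarm

for _ in range(ITERATIONS):
    for i in range(N):
        r1, r2 = random.random(), random.random()
        # Velocity update: inertia plus pulls toward personal and global bests.
        velocities[i] = (W * velocities[i]
                         + C1 * r1 * (personal_best[i] - positions[i])
                         + C2 * r2 * (global_best - positions[i]))
        positions[i] += velocities[i]
        if f(positions[i]) < f(personal_best[i]):
            personal_best[i] = positions[i]
            if f(positions[i]) < f(global_best):
                global_best = positions[i]

print(round(global_best, 3))          # close to 3.0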
Particle Swarm Optimization

Image: Research Gate


Bayesian Belief Networks
• A Bayesian belief network is a key technology for dealing with probabilistic events and for solving problems that involve uncertainty. We can define a Bayesian network as:
– "A Bayesian network is a probabilistic graphical model
which represents a set of variables and their conditional
dependencies using a directed acyclic graph."
• It is also called a Bayes network, belief network, decision
network, or Bayesian model.
• Bayesian networks are probabilistic, because these networks
are built from a probability distribution, and also use
probability theory for prediction and anomaly detection.
Bayesian Belief Networks
• Real world applications are probabilistic in nature, and
to represent the relationship between multiple events,
we need a Bayesian network.
• It can also be used in various tasks including prediction,
anomaly detection, diagnostics, automated insight,
reasoning, time series prediction, and decision making
under uncertainty.
• A Bayesian network can be used for building models from data and expert opinions, and it consists of two parts:
– Directed Acyclic Graph
– Table of conditional probabilities.
Bayesian Belief Networks

• The generalized form of a Bayesian network that represents and solves decision problems under uncertain knowledge is known as an influence diagram.

• A Bayesian network graph is made up of nodes and arcs (directed links), where:
– Each node represents a random variable (discrete or continuous).
– Each arc represents a conditional dependency between the variables it connects.
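A tiny Bayesian network sketched in plain Python (Rain and Sprinkler as parents of GrassWet), with assumed conditional probability tables; the posterior P(Rain | GrassWet) is obtained by enumerating the joint distribution.

# Tiny Bayesian network: Rain -> GrassWet and Sprinkler -> GrassWet.
# All probability values below are illustrative assumptions.
from itertools import product

P_RAIN = {True: 0.2, False: 0.8}                      # prior on Rain
P_SPRINKLER = {True: 0.1, False: 0.9}                 # prior on Sprinkler
# Conditional probability table: P(GrassWet=True | Sprinkler, Rain)
P_WET = {(True, True): 0.99, (True, False): 0.9,
         (False, True): 0.8, (False, False): 0.0}

def joint(rain, sprinkler, wet):
    p_wet_true = P_WET[(sprinkler, rain)]
    p_wet = p_wet_true if wet else 1 - p_wet_true
    return P_RAIN[rain] * P_SPRINKLER[sprinkler] * p_wet

# Enumerate all hidden states consistent with the evidence GrassWet = True.
evidence = sum(joint(r, s, True) for r, s in product([True, False], repeat=2))
posterior = sum(joint(True, s, True) for s in [True, False]) / evidence

print(round(posterior, 3))   # P(Rain | GrassWet) -- well above the 0.2 prior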
Expert Systems
• An expert system is a computer program that is
designed to solve complex problems and to provide
decision-making ability like a human expert.
• It performs this by extracting knowledge from its
knowledge base using the reasoning and inference
rules according to the user queries.
Expert Systems

• The expert system is a part of AI; the first expert system was developed in 1970 and was among the first successful applications of artificial intelligence.
• It solves the most complex issues as an expert would, by drawing on the knowledge stored in its knowledge base. The system helps in decision making for complex problems using both facts and heuristics, like a human expert.
• It is called an expert system because it contains the expert knowledge of a specific domain and can solve complex problems of that particular domain.
• These systems are designed for a specific domain, such as medicine, science, etc. (a minimal rule-based sketch follows below).
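A minimal rule-based sketch in Python: a few IF-THEN rules, a set of facts, and a simple forward-chaining inference loop. The medical-style rules are illustrative assumptions, not real domain knowledge.

# Minimal expert-system sketch: knowledge base of IF-THEN rules plus a
# forward-chaining inference engine that fires rules until nothing changes.

# Knowledge base: (set of required facts, conclusion to add). Assumed rules.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "high_risk_patient"}, "refer_to_doctor"),
    ({"rash"}, "allergy_suspected"),
]

def infer(facts):
    """Keep firing rules whose conditions are satisfied until no rule adds anything new."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# User query: the known facts about one case.
print(infer({"fever", "cough", "high_risk_patient"}))
# -> includes 'flu_suspected' and 'refer_to_doctor'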
Importance of Soft Computing

• The conceptual structure of soft computing suggests that students should be trained not just in fuzzy logic, neurocomputing, genetic programming, or probabilistic reasoning, but in all of the associated methodologies, though not necessarily to the same degree.
FUTURE OF SOFT COMPUTING

• Soft computing represents a significant paradigm shift in the aims of computing.

• It is a shift which reflects the fact that the human mind, unlike present-day computers, possesses a remarkable ability to store and process information which is pervasively imprecise, uncertain and lacking in categoricity.
Hard Computing Vs Soft Computing

• Hard computing is based on binary logic, crisp systems, numerical analysis and crisp software.

• Soft computing is based on fuzzy logic, neural nets and probabilistic reasoning.

• Hard computing requires programs to be written.

• Soft computing can evolve its own programs.


Conclusion
• What is particularly significant is that in both consumer
products and industrial systems, the employment of soft
computing techniques leads to systems which have high
MIQ (Machine Intelligence Quotient).

• The successful applications of soft computing suggest that the impact of soft computing will be felt increasingly in coming years.
Conclusion

• Soft computing is an emerging approach to computing which parallels the remarkable ability of the human mind to reason and learn in an environment of uncertainty and imprecision.
• Soft computing is based on biologically inspired methodologies such as genetics, evolution, ant behavior, particle swarming, the human nervous system, etc.
Thanks
