UNIT - I
EVOLUTIONARY COMPUTING
Q) Explain the concept of problem solving as a search task with a suitable real-world
example.
In evolutionary computing, problem solving as a search task refers to viewing the process of
finding a solution as exploring a search space of possible solutions to identify the one that
best meets the desired objective. A search algorithm takes the problem as input, represents
possible solutions (called individuals or candidate solutions), evaluates their quality using an
evaluation function, and moves towards better solutions through a sequence of steps.
In this process, solutions can be global optima (best in the entire search space) or local
optima (best in a limited neighbourhood). Effective search balances exploitation of the best
solutions found so far with exploration of new areas in the search space.
Under the evolutionary perspective, a problem is seen as a collection of information from
which knowledge or an optimal outcome can be extracted. Examples include:
• Function maximization – e.g., for f(x) = x^3 + x + 3, find the value of x within a given
interval that gives the maximum output.
• Timetabling problem – assigning classes to students while considering constraints like
room capacity, teacher availability, and avoiding time clashes.
Problem-solving is the process of taking actions or sequences of actions that lead to improved
performance. In computational terms, this process is called search. A search algorithm takes
the problem as input and returns a solution by exploring possible candidate solutions (also
called individuals or agents) and evaluating their quality. Even if exact performance
knowledge is unavailable, relative comparisons between candidates can guide the search.
The first stage is problem formulation, which depends on available information. It involves:
1. Choice of representation – deciding how to encode each candidate solution, which
defines the search space and its size.
2. Specification of the objective – clearly stating the goal, such as maximizing or
minimizing a given function.
3. Definition of an evaluation function – providing a way to measure the quality of
each solution, even if only relatively.
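The three formulation steps can be illustrated with a minimal Python sketch for the function-maximization example above; the interval bounds and the number of candidates are illustrative assumptions, not part of the original notes.

```python
import random

# 1. Choice of representation: a candidate solution is a single real
#    number x from a bounded interval (bounds assumed for illustration).
LOW, HIGH = -10.0, 10.0

def random_candidate():
    return random.uniform(LOW, HIGH)

# 2. Specification of the objective: maximize f(x) = x^3 + x + 3
#    over the interval.
# 3. Definition of an evaluation function: here the objective value
#    itself serves as the fitness of a candidate.
def fitness(x):
    return x ** 3 + x + 3

candidates = [random_candidate() for _ in range(5)]
best = max(candidates, key=fitness)
print(best, fitness(best))
```

Even this tiny sketch already defines a search space (the interval), candidate solutions, and a way to compare them, which is all a search algorithm needs to start.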
➢ Two key solution types are:
• Global optimum – the best solution in the entire search space.
• Local optimum – the best solution only within a small neighbourhood.
➢ Effective search requires balancing:
• Exploitation – improving the best solutions found so far.
• Exploration – examining new regions of the search space.
➢ Different techniques have different strengths:
• Hill climbing focuses on exploitation but risks getting stuck in local optima.
• Simulated annealing and genetic algorithms combine exploration and
exploitation.
Q) Compare hill climbing and simulated annealing in terms of working principle,
advantages, and limitations.
Hill Climbing
Hill climbing is a local search method that uses an iterative improvement strategy.
It operates on a single point (current state) in the search space and moves to a better
neighbouring state until a stopping condition is met.
Working Principle
1. Initialization: Start with a point x in the search space.
2. Perturbation: Generate a new point x′ by making a small change in x.
3. Evaluation: Compare x′ with x using the evaluation function.
4. Move:
o If x′ is better, replace x with x′.
o Else, try another neighbour.
5. Termination: Stop when:
o No improvement can be made, or
o A maximum number of iterations is reached, or
o A goal point is found.
➢ A standard (simple) hill climbing procedure:
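A minimal Python sketch of this procedure, assuming a one-dimensional real-valued search space and a small uniform random perturbation as the neighbourhood operator (the step size and iteration limit are illustrative assumptions):

```python
import random

def hill_climb(f, x, step=0.1, max_iters=1000):
    """Simple hill climbing: repeatedly generate a perturbed neighbour
    and move to it only if it improves the evaluation function f."""
    for _ in range(max_iters):
        x_new = x + random.uniform(-step, step)  # perturbation
        if f(x_new) > f(x):                      # accept only improvements
            x = x_new
    return x

# Example: maximize f(x) = -(x - 2)^2, whose single optimum is x = 2.
best = hill_climb(lambda x: -(x - 2) ** 2, x=0.0)
print(best)
```

Because only improving moves are accepted, a different starting point on a multi-peak function can leave the search stuck on a lower local peak.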
Simulated Annealing
Simulated Annealing (SA) is a smart way to search for the best solution to a problem.
It’s based on how metal is made stronger: heat it up, let particles move freely, then cool it
down slowly so they settle into the best, most stable structure.
Working Principle
1. Start with a guess for the solution and set a high “temperature” (just a number, not
real heat).
2. Make small changes to the solution to see if you get a better one.
3. If the new solution is better, accept it.
4. If it’s worse, sometimes still accept it, especially when the temperature is high. This
helps you avoid getting stuck in “okay but not great” solutions.
5. Lower the temperature slowly as you keep trying new solutions.
6. As the temperature gets low, the algorithm becomes pickier and stops accepting bad
moves.
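These six steps can be sketched in Python; the initial temperature, cooling rate, perturbation range, and step count are illustrative assumptions:

```python
import math
import random

def simulated_annealing(f, x, temp=10.0, cooling=0.99, steps=2000):
    """Minimize f: always accept better moves, and accept worse moves
    with probability exp(-delta / temp), cooling the temperature each step."""
    for _ in range(steps):
        x_new = x + random.uniform(-1, 1)   # small random change
        delta = f(x_new) - f(x)             # positive delta = worse (minimizing)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = x_new                       # accept the move
        temp *= cooling                     # lower the temperature slowly
    return x

# Example: minimize f(x) = (x - 3)^2; the minimum is at x = 3.
best = simulated_annealing(lambda x: (x - 3) ** 2, x=-5.0)
print(best)
```

Early on, the high temperature makes even bad moves likely to be accepted; as it cools, the acceptance rule becomes effectively greedy, matching step 6 above.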
Comparison of Hill Climbing and Simulated Annealing:

Working Principle
• Hill Climbing: starts with a solution and repeatedly moves to the best neighbouring
solution; always accepts better solutions and rejects worse ones.
• Simulated Annealing: starts with a solution at a high “temperature”; accepts better
solutions, and sometimes worse ones, with a probability that decreases as the
temperature cools.

Exploration
• Hill Climbing: explores only locally around the current solution.
• Simulated Annealing: can explore widely at the start, then narrows down as the
temperature drops.

Risk-taking
• Hill Climbing: never accepts worse solutions, so it can get stuck in local optima.
• Simulated Annealing: may accept worse solutions early on to escape local optima.

Advantages
• Hill Climbing: simple and easy to implement; fast for smooth search spaces.
• Simulated Annealing: can escape local optima; suitable for complex, multi-peak
problems.

Limitations
• Hill Climbing: gets stuck in local optima; sensitive to the starting point.
• Simulated Annealing: slower due to gradual cooling; requires careful choice of the
temperature schedule.

Analogy
• Hill Climbing: climbing uphill until no higher step is found.
• Simulated Annealing: heating and slowly cooling metal so particles settle into the
best arrangement.
Q) Describe the main concepts of evolutionary biology that inspired evolutionary
computing.
Evolutionary biology is the branch of science that studies how living things change over time,
why they are different or similar, and how they adapt to their environments. It is important for
many areas from understanding how diseases spread to improving crops.
Over the last 60 years, computer scientists noticed that the way nature evolves can also be
used to solve complex real-world problems. This idea gave birth to evolutionary computing.
What does “evolution” mean?
The word comes from the Latin evolvere, meaning “to unfold” or “unroll.”
In science, evolution means descent with modification: populations of living things change
over generations, often becoming more diverse. It is not about changes in one individual
during its lifetime, but about how groups change as traits are passed on and altered over many
generations.
• Examples of evolutionary systems: languages, car designs, recipes, and of course,
living organisms.
Key concepts from evolutionary biology that inspired computing:
1. Populations – Evolution involves groups of individuals, not just single organisms. In
computing, a population is a group of possible solutions.
2. Reproduction – Individuals produce offspring (sexually or asexually). In computing,
new solutions are generated from existing ones.
3. Variation – Not all individuals are the same; they have differences in traits. In
computing, solutions vary through changes (mutations or recombination).
4. Hereditary similarity – Offspring look like their parents but with some changes. In
computing, new solutions inherit features from parent solutions.
5. Sorting of variations – Traits that help survival (or problem-solving) are more likely
to be kept:
o Chance – Some traits survive or disappear randomly.
o Natural selection – Helpful traits consistently give an advantage and spread.
6. Adaptation – Over generations, variation + selection leads to better survival or
performance. In computing, this means the algorithm produces better solutions over
time.
Historical background:
• Charles Darwin & Alfred Wallace – Developed the theory of natural selection,
where the best-adapted individuals are more likely to survive and reproduce, passing
on their advantages.
• Jean-Baptiste Lamarck – Suggested the inheritance of acquired characteristics, like
giraffes getting longer necks because they stretch for leaves and passing that trait to
offspring. This idea is mostly disproven today, but it influenced early evolutionary
thought.
Connection to Evolutionary Computing:
In evolutionary computing:
• Individuals = possible solutions to a problem
• Populations = a set of solutions considered at the same time
• Reproduction & variation = creating new solutions by combining and mutating
existing ones
• Selection = keeping the best solutions and removing the worst
• Generations = repeating the process until solutions become very good (adapted)
This approach mimics nature’s trial-and-error process, making it powerful for solving
problems where the best answer isn’t obvious.
Q) Define evolutionary computing and explain its general working process.
Evolutionary Computing (EC) is a field of research that develops problem-solving algorithms
inspired by the process of natural evolution. These algorithms, called Evolutionary
Algorithms (EAs), use ideas from Darwin’s theory, such as reproduction, mutation, and
natural selection, to search for good solutions to complex problems.
The idea started in the 1950s–1960s, when scientists realized that the principles of biological
evolution could be simulated on a computer to solve a wide range of problems, from planning
to control systems.
Examples of EC approaches include:
• Genetic Algorithms (GAs)
• Evolution Strategies (ES)
• Evolutionary Programming (EP)
• Genetic Programming (GP)
The standard process mimics how species evolve in nature:
1. Population Initialization
o Start with a population of individuals (possible solutions), usually created
randomly.
o Each individual represents a point in the search space: a possible answer to
the problem.
2. Evaluation (Fitness Function)
o Each individual is tested using an evaluation function (fitness function) to
measure how good it is at solving the problem.
o Higher fitness = better performance.
3. Selection
o Compare individuals based on fitness and select the best to reproduce.
o This mimics natural selection, where better-adapted individuals survive.
4. Reproduction
o Create new individuals from the selected ones:
▪ Crossover – combine traits from two parents.
▪ Cloning – copy a single parent.
5. Genetic Variation
o Apply mutation to some individuals to introduce new traits and explore new
areas of the search space.
6. Replacement
o The new offspring replace the old population.
o This forms a new generation.
7. Repeat
o Steps 2–6 are repeated until a stopping condition is met (e.g., reaching a
maximum number of generations or finding a solution good enough).
A standard evolutionary algorithm:
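The seven steps above can be sketched in Python; the bit-string representation, tournament selection, one-point crossover, and all parameter values are illustrative assumptions rather than fixed parts of the algorithm:

```python
import random

def evolve(fitness, length=20, pop_size=30, generations=60,
           mutation_rate=0.02):
    """A standard evolutionary algorithm over bit strings:
    initialize, evaluate, select, recombine, mutate, replace, repeat."""
    # 1. Population initialization: random bit strings
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # 2-3. Evaluation and selection (binary tournament)
        def select():
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = select(), select()
            cut = random.randrange(1, length)            # 4. crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]                     # 5. mutation
            offspring.append(child)
        pop = offspring                                  # 6. replacement
    return max(pop, key=fitness)                         # 7. repeat until done

# Example: maximize the number of 1s (the classic "OneMax" problem).
best = evolve(fitness=sum)
print(sum(best))
```

The stopping condition here is simply a fixed number of generations; a quality threshold on the best fitness would work equally well.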
Q) List and briefly describe other main evolutionary algorithms apart from genetic
algorithms.
Evolution Strategies
Evolution Strategies are a type of evolutionary algorithm inspired by the principles of natural
evolution. They were originally developed for engineering problems and are especially
effective in optimizing continuous variables: values that can take any number, unlike genetic
algorithms which often use binary strings. ES focuses on improving solutions over
generations by mimicking how species adapt and evolve in nature.
How it Works:
1. Initialize Population: Create a population of candidate solutions, each representing a
possible answer to the problem, usually with real-number values corresponding to
system parameters.
2. Mutation: Apply small random changes to the candidate solutions to explore new
possibilities; ES relies heavily on mutation and may not use crossover.
3. Evaluation: Use a fitness function to measure how well each solution solves the
problem (e.g., minimizing cost, maximizing efficiency).
4. Selection: Choose the best-performing solutions to form the next generation, ensuring
gradual improvement over time.
5. Iteration: Repeat mutation, evaluation, and selection until a stopping criterion is met,
such as achieving satisfactory performance or completing a set number of generations.
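A minimal sketch of the simplest variant, the (1+1)-ES, assuming a fixed Gaussian mutation strength (real implementations usually adapt it during the run):

```python
import random

def one_plus_one_es(f, x, sigma=0.5, generations=500):
    """A minimal (1+1) Evolution Strategy: one parent produces one
    offspring per generation by Gaussian mutation of its real-valued
    parameters, and the better of the two survives (minimizing f)."""
    for _ in range(generations):
        child = [xi + random.gauss(0, sigma) for xi in x]  # mutation
        if f(child) <= f(x):                               # selection
            x = child
    return x

# Example: minimize the sphere function sum(x_i^2); optimum at the origin.
def sphere(v):
    return sum(xi * xi for xi in v)

best = one_plus_one_es(sphere, x=[3.0, -4.0])
print(best, sphere(best))
```

Note that the sketch uses mutation and selection only, reflecting the point above that ES can work without crossover.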
Applications:
Evolution Strategies are widely used in fields that involve engineering and optimization
problems. Some common applications include:
• Engineering Design: Optimizing mechanical or structural components for efficiency
and strength.
• Structural Optimization: Designing structures that minimize material use while
maintaining safety.
• Robotics: Tuning control parameters for robot movement or behaviour.
• Control Systems: Improving the performance of automatic controllers for machines
and processes.
Key Points:
• ES focuses on real-valued parameters rather than binary or discrete values.
• It relies heavily on mutation, sometimes avoiding crossover entirely.
• The algorithm is well-suited for complex problems where continuous optimization is
critical.
Evolutionary Programming (EP)
Evolutionary Programming is an evolutionary algorithm inspired by the idea of behavioural
evolution rather than genetics. Unlike genetic algorithms that focus on the combination of
parent traits, EP emphasizes the gradual improvement of individual behaviours over time. It
was originally developed for studying and simulating the evolution of strategies and
behaviours in nature.
How it Works:
1. Initialize Population: Start with a population of candidate solutions, each representing
a possible way to solve the problem (strategies, parameters, or behaviours).
2. Mutation: Each solution mutates independently, introducing small random changes to
explore new possibilities. EP usually does not use crossover.
3. Evaluation: Evaluate the fitness of all solutions (parents and offspring) using a fitness
function to measure how well they solve the problem.
4. Selection: Select the best solutions based on fitness to form the next generation.
5. Iteration: Repeat mutation, evaluation, and selection until a stopping criterion is met
(satisfactory solution or maximum generations).
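The EP cycle can be sketched in Python; the Gaussian mutation strength and the plain truncation selection are illustrative assumptions (classical EP typically uses a stochastic tournament instead):

```python
import random

def ep_step(pop, fitness, sigma=0.3):
    """One Evolutionary Programming generation: every individual mutates
    independently (no crossover), then parents and offspring compete
    together and the best half survive (minimizing fitness)."""
    offspring = [[xi + random.gauss(0, sigma) for xi in ind] for ind in pop]
    combined = pop + offspring            # parents and offspring compete
    combined.sort(key=fitness)
    return combined[:len(pop)]            # keep the best half

# Example: minimize sum(x_i^2) with a population of 10 two-dimensional points.
sphere = lambda v: sum(x * x for x in v)
pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(10)]
for _ in range(200):
    pop = ep_step(pop, sphere)
best = min(pop, key=sphere)
print(best)
```

The key contrast with a genetic algorithm is visible in the code: each offspring comes from exactly one parent, so no traits are recombined.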
Genetic Programming (GP)
Genetic Programming is an evolutionary algorithm inspired by the process of natural
evolution, but instead of evolving individuals or numbers, it evolves computer programs or
expressions. The main idea is to automatically discover programs or formulas that can solve a
given problem, even when the exact solution structure is unknown.
How it Works:
1. Initialize Population: Create a population of candidate programs, often represented as
tree structures, where internal nodes are functions or operations and leaves are inputs,
constants, or variables.
2. Evaluation: Use a fitness function to measure how well each program performs the
task, considering criteria like accuracy, efficiency, or problem-specific goals.
3. Genetic Operators: Apply crossover to swap subtrees between parent programs and
mutation to introduce small random changes, generating new candidate programs.
4. Selection: Choose the best-performing programs based on fitness to form the next
generation.
5. Iteration: Repeat evaluation, genetic operations, and selection until a stopping
criterion is met, gradually improving program performance over generations.
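A minimal GP sketch that evolves expression trees for a small symbolic-regression task; the function set, the mutation-only variation (crossover is omitted here for brevity), and the target function x^2 + x are all illustrative assumptions:

```python
import random

# Programs are expression trees: ('add', l, r), ('mul', l, r),
# the variable 'x', or a float constant.
FUNCS = ['add', 'mul']

def random_tree(depth=2):
    """Grow a random program tree up to the given depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', float(random.randint(-2, 2))])
    return (random.choice(FUNCS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Interpret a program tree for a given input x."""
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, l, r = tree
    a, b = evaluate(l, x), evaluate(r, x)
    return a + b if op == 'add' else a * b

def mutate(tree):
    """Replace a randomly chosen subtree with a fresh random one."""
    if random.random() < 0.3 or not isinstance(tree, tuple):
        return random_tree(2)
    op, l, r = tree
    return (op, mutate(l), r) if random.random() < 0.5 else (op, l, mutate(r))

def fitness(tree):
    # Squared error against the target x^2 + x on a few sample points.
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-3, 4))

pop = [random_tree() for _ in range(50)]
for _ in range(100):
    pop.sort(key=fitness)                 # evaluate and rank programs
    pop = pop[:25] + [mutate(random.choice(pop[:25])) for _ in range(25)]
best = min(pop, key=fitness)
print(best, fitness(best))
```

The tree representation matches the description above: internal nodes are functions ('add', 'mul') and leaves are inputs or constants, so the search operates on program structure rather than on a fixed-length string of numbers.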
Q) Explain the transition from evolutionary biology concepts to computing applications.
The transition from evolutionary biology to computing applications involves translating
concepts from natural evolution into algorithms that can solve computational problems. In
nature, evolution operates through processes like selection, mutation, and recombination,
acting on genes within chromosomes to produce fitter organisms over generations.
Evolutionary computing applies these same ideas to problem-solving in a digital
environment.
Interpretation of the biological terminology into the computational world for genetic
algorithms:
• Chromosome (individual) → an encoded candidate solution
• Gene → a single element (variable) of the encoding
• Allele → a value that a gene can take
• Locus → the position of a gene within the chromosome
• Genotype → the encoded representation of a solution
• Phenotype → the decoded, actual solution
• Fitness → the value of the evaluation function for a solution
• Generation → one iteration of the evolutionary cycle
Q) Discuss the scope of evolutionary computing and the relevance of the No Free Lunch
theorem.
The scope of evolutionary computing (EC) is broad, encompassing a wide range of problem-
solving domains where traditional methods may struggle. Evolutionary algorithms (EAs) are
particularly effective in situations where the search space is large, complex, or poorly
understood, where the fitness landscape may be noisy, or where the problem is neither
perfectly smooth nor unimodal. In such cases, the stochastic and population-based nature of
EAs allows them to explore many potential solutions simultaneously and adaptively, making
them competitive approaches for optimization and search tasks.
However, if the search space is smooth or unimodal, classical methods like gradient-based or
hill-climbing approaches tend to outperform EAs, as these methods can exploit the structure
of the problem more efficiently. When the problem structure is well understood, as in
combinatorial problems like the Traveling Salesman Problem (TSP), heuristics and problem-
specific adaptations can be incorporated into EAs to enhance performance.
Beasley (2000) categorized the applications of EAs into five broad areas:
1. Planning: Includes routing, scheduling, and packing problems.
2. Design: Such as signal processing and other engineering designs.
3. Simulation, Identification, and Control: General plant or system control
applications.
4. Classification: Machine learning, pattern recognition, and classification tasks.
5. Creative and Specialized Applications: Including art and music composition,
robotics, language processing, electronics, engineering, data mining, and industry-
specific optimization.
The No Free Lunch (NFL) theorem emphasizes that no single algorithm can outperform all
others across every possible problem. This means that while EAs are powerful and flexible,
their effectiveness is problem-dependent. The NFL theorem underlines that evolutionary
algorithms are best suited for problems where the search space is complex, poorly
understood, or noisy, but they are not universally superior.
In summary, evolutionary computing is highly relevant in domains requiring adaptive,
heuristic-driven search, but its success relies on matching the algorithm’s strengths to the
characteristics of the problem.
SUMMARY OF THE CHAPTER
What are Evolutionary Algorithms (EAs)?
Evolutionary Algorithms are computer methods inspired by the way living things evolve in
nature. Just like in nature, where animals and plants adapt over generations, EAs try to
improve solutions to problems over many “generations.”
• Population of Solutions: EAs work with a group (population) of possible solutions
instead of just one. Each solution is like an individual in nature.
• Fitness: Each solution is tested to see how good it is at solving the problem. This
“goodness” is called fitness. Better solutions have higher fitness.
• Environment: The problem itself acts like the environment in nature. Solutions that
“fit” better survive and improve over generations.
How EAs Work (Step by Step)?
1. Create Initial Population:
o Start with a set of random solutions. Each solution is like a guess at solving
the problem.
2. Reproduction:
o Make copies of some solutions. Good solutions are more likely to be copied.
3. Variation (Mutation and Crossover):
o Mutation: Make small random changes to a solution. This is like a random
tweak that might make it better.
o Crossover (for some EAs like GA): Combine parts of two solutions to create a
new one, like mixing traits from two parents.
4. Evaluation:
o Check how good each solution is using a fitness function.
5. Selection:
o Choose the best solutions to form the next generation. Poor solutions are
discarded.
6. Repeat:
o Repeat the cycle of variation, evaluation, and selection until the algorithm
finds a good solution or reaches a limit.
Where EAs Come From?
• Inspired by Nature: They are based on Darwin’s idea of natural selection.
• Blind Watchmaker: Richard Dawkins described evolution as a “blind watchmaker.”
That means there’s no intelligent designer: improvements happen automatically over
time through selection.
• Blind Search: EAs don’t know the best solution at the start. They find solutions by
testing, improving, and selecting the best candidates.
Why EAs Are Useful?
• They are general-purpose methods, meaning they can be used for many types of
problems.
• Work well for large, complex, or uncertain problems where traditional methods might
fail.
• Can adapt to changing environments, just like living organisms adapt to new
conditions.
• Can find solutions without needing to know the answer in advance, making them very
flexible.
Types of Evolutionary Algorithms
1. Genetic Algorithms (GA):
o Use crossover and mutation.
o Often applied to optimization and search problems.
2. Evolution Strategies (ES):
o Focus mainly on mutation.
o Good for continuous optimization problems.
3. Evolutionary Programming (EP):
o Focuses on evolving behaviours or strategies.
o Relies on mutation and selection, not crossover.
4. Genetic Programming (GP):
o Evolves computer programs or formulas automatically.
o Uses crossover and mutation to improve programs over generations.
Key Points to Remember
• EAs are iterative, meaning they improve solutions step by step.
• They are population-based, working with many solutions at once.
• They perform a blind search: solutions evolve without knowing the best answer in
advance.
• They are adaptive, meaning they can handle complex, changing, or uncertain
problems.