
B V Raju Institute of Technology

Vishnupur, Narsapur, Medak District (Autonomous)

Introduction to Artificial Intelligence


Department of Computer Science and
Engineering

Authored by
Ayesha Naureen M.Tech (Ph.D)
Assistant Professor
CSE Dept
BVRIT
AY: 2024-2025 SEM-I, Regulation- 22

II B.Tech
Syllabus:
UNIT I
Introduction:
Artificial Intelligence
What is Artificial Intelligence?
According to the father of Artificial Intelligence, John McCarthy, it is "The science and
engineering of making intelligent machines, especially intelligent computer programs".
Artificial Intelligence is a way of making a computer, a computer-controlled robot, or
software think intelligently, in a manner similar to how intelligent humans think. AI is
accomplished by studying how the human brain thinks and how humans learn, decide, and
work while trying to solve a problem, and then using the outcomes of this study as a basis for
developing intelligent software and systems.
AI Approaches: Cognitive science, Laws of thought, Turing Test, Rational agent.

AI Techniques (techniques that make a system behave intelligently): Describe and match,
Goal reduction, Constraint satisfaction, Tree searching, Generate and test, Rule-based
systems; biology-inspired AI techniques: Neural Networks, Genetic Algorithms,
Reinforcement Learning.

Branches of AI: Logical AI, Search in AI, Pattern Recognition, Knowledge Representation,
Inference, Commonsense knowledge and reasoning, Learning, Planning, Epistemology,
Ontology, Heuristics, Genetic programming.

Philosophy of AI
● While exploiting the power of computer systems, human curiosity led people to
wonder, "Can a machine think and behave like humans do?"
● Thus, the development of AI started with the intention of creating in machines the kind of
intelligence that we find and regard highly in humans.
Goals of AI
● To Create Expert Systems − Systems which exhibit intelligent behaviour: they learn,
demonstrate, explain, and advise their users.
● To Implement Human Intelligence in Machines − Creating systems that understand, think,
learn, and behave like humans.
General AI Goal, Engineering-based AI Goal, Science-based AI Goal.

Types of artificial intelligence—weak AI vs. strong AI


● Weak AI—also called Narrow AI or Artificial Narrow Intelligence (ANI)— is AI trained
and focused to perform specific tasks. Weak AI drives most of the AI that surrounds us today.
‘Narrow’ might be a more accurate descriptor for this type of AI as it is anything but weak; it
enables some very robust applications, such as Apple's Siri, Amazon's Alexa, IBM Watson,
and autonomous vehicles.
● Strong AI is made up of Artificial General Intelligence (AGI) and Artificial Super
Intelligence (ASI). Artificial general intelligence (AGI), or general AI, is a theoretical form of
AI in which a machine would have intelligence equal to that of humans; it would have a self-
aware consciousness with the ability to solve problems, learn, and plan for the future.
Historical Backdrop & Future
Artificial Intelligence is not a new word and not a new technology for researchers. This
technology is much older than you might imagine; there are even myths of mechanical men
in ancient Greek and Egyptian mythology. The following are some milestones in the history
of AI that trace its journey from its origins to the present day.
Maturation of Artificial Intelligence (1943-1952)
Year 1943: The first work which is now recognized as AI was done by Warren McCulloch
and Walter Pitts in 1943. They proposed a model of artificial neurons.
Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection
strength between neurons. His rule is now called Hebbian learning.
Year 1950: Alan Turing, an English mathematician and a pioneer of machine intelligence,
published "Computing Machinery and Intelligence" in 1950, in which he proposed a test
that checks a machine's ability to exhibit intelligent behaviour equivalent to human
intelligence, now called the Turing test.
The Birth of Artificial Intelligence (1952-1956) Year 1955:
Allen Newell and Herbert A. Simon created the first artificial intelligence program,
named the "Logic Theorist". This program proved 38 of 52 mathematical
theorems and found new, more elegant proofs for some of them.
Year 1956: The term "Artificial Intelligence" was first adopted by the American computer
scientist John McCarthy at the Dartmouth Conference. For the first time, AI was coined as
an academic field.
At that time, high-level computer languages such as FORTRAN, LISP, and COBOL were
being invented, and enthusiasm for AI was very high.
The Golden Years-Early Enthusiasm (1956-1974)
Year 1966: Researchers emphasized developing algorithms that could solve
mathematical problems. Joseph Weizenbaum created the first chatbot in 1966,
named ELIZA.
Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.
The First AI Winter (1974-1980)
The period from 1974 to 1980 was the first AI winter.
An AI winter refers to a period in which computer scientists dealt with a severe shortage of
government funding for AI research.
During AI winters, public interest in artificial intelligence declined.
A Boom of AI (1980-1987) Year 1980:
After the AI winter, AI came back with "Expert Systems". Expert systems were
programs that emulated the decision-making ability of a human expert. In the year 1980,
the first national conference of the American Association for Artificial Intelligence was held
at Stanford University.
The Second AI winter (1987-1993)
The period from 1987 to 1993 was the second AI winter.
Again, investors and governments stopped funding AI research due to high costs and
inefficient results. Even expert systems such as XCON, which had initially been very
cost-effective, became too expensive to maintain.
The Emergence of Intelligent Agents (1993-2011) Year 1997:
In the year 1997, IBM's Deep Blue beat the world chess champion, Garry Kasparov, and
became the first computer to beat a reigning world chess champion.
Year 2002: For the first time, AI entered the home in the form of Roomba, a vacuum cleaner.
Year 2006: AI entered the business world. Companies like Facebook, Twitter, and Netflix
started using AI.
Deep Learning, Big Data and Artificial General Intelligence (2011-Present) Year 2011:
In the year 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to solve
complex questions as well as riddles. Watson proved that it could understand natural
language and solve tricky questions quickly.
Year 2012: Google launched the Android app feature "Google Now", which was able to
provide information to the user as a prediction.
Year 2014: The chatbot "Eugene Goostman" won a competition based on the famous
"Turing test".
Year 2018: IBM's "Project Debater" debated complex topics with two master
debaters and performed extremely well.
Google demonstrated an AI program, "Duplex", a virtual assistant that booked a
hairdresser appointment over the phone, and the person on the other end did not notice
that she was talking to a machine.

Future of Artificial Intelligence


1. Healthcare:
● The healthcare industry is bound to witness a drastic change.
● Artificially intelligent robots would be employed to perform complex surgeries with a
high degree of precision. AI puts consumers in control of their own health and well-being.
● Additionally, AI increases the ability of healthcare professionals to understand the
day-to-day patterns and needs of the people they look after.
● With that understanding, they are able to provide better feedback, guidance, and support
for staying healthy.
● AI would also be built into wearable devices such as watches and wristbands to monitor
the human body and predict diseases.
2. Autonomous Vehicles
● Everyone these days is hyped about "Autonomous Vehicles". Level 2 autonomy has
already been achieved by Tesla. Autonomous driving is one of the key application areas
of AI.
● Once autonomous vehicles are on the road, cab services like Uber and Ola would become
driverless. This would change the way the transport industry functions.
● The autonomous vehicle market driven by AI is projected to reach a valuation of
$127 billion by 2025.
3. Security and Defence
● The use cases for AI in defence and security are virtually unlimited.
● AI can be embedded into weapons and surveillance systems to enhance performance. It
can improve target recognition, combat simulation and training, and threat monitoring.
● Most importantly, the critical and risky jobs of securing the country's borders can be
delegated to artificially intelligent robots, unmanned aircraft, drones, UAVs, etc.
● This would reduce the risk to the lives of soldiers on the borders and provide better
surveillance using evolved facial recognition technologies.
4. Manufacturing
● Manufacturing in the near future would be fully automated. AI also optimizes
manufacturing supply chains, helping companies anticipate market changes. This information
is invaluable to manufacturers because it allows them to optimize staffing, internal control,
energy consumption, and the supply of raw materials.
● Manufacturing processes enabled by artificially intelligent systems would not just
perform the required operations but would also be able to inspect, improve, and
quality-check products without any human intervention.
● In addition, according to reports by MarketsandMarkets, artificial intelligence in the
manufacturing market is expected to grow from USD 1.0 billion in 2018 to USD 17.2 billion
by 2025, at a CAGR of 49.5% during the forecast period.
5. Education
● The future of classrooms is digital. Already, there are many highly informative courses
on online platforms that can be accessed from anywhere, anytime. AI can expedite and
automate administrative duties for teachers and academic institutions.
● Educators spend a great deal of their time grading exams, assessing homework, and
providing valuable feedback to their students. AI is allowing the classification and
processing of paperwork to be automated.
● The concept of education will be redefined: learning from the comfort of home,
personalized to every student's needs with the help of artificially intelligent systems.
● According to a report by Market Research Future (MRFR), the AI-enabled ed-tech sector
is projected to grow to USD 2 billion, at a CAGR of 38% during 2018-2023.
6. Entertainment
● Already, OTT platforms like Netflix and Amazon Prime are rapidly increasing their user
base. Smart algorithms will be able to come up with the best marketing and
advertising solutions.
● With AI, it is possible to make all marketing processes several times faster by utilizing
predictive analytics.
● In the future, AI will be able to predict not just our preferences but also our mood, and
display content accordingly.
● Virtual-reality-enabled sci-fi content and games can be part of the sources of entertainment
in the future.
● By 2021, the U.S. E&M industry is projected to reach $759 billion in revenue, increasing at
a compound annual growth rate (CAGR) of 3.6 percent.
7. Workplace
● Businesses are using AI to enhance the productivity of their employees.
● The advantage of AI for business is that it handles repetitive tasks across an organization
so that employees can focus on creative solutions, complex problem solving, and
impactful work.
● One example of this is the chatbot.
● The concept of the workplace will also be redefined by the advent of technology. The
future workplace will not just be global in nature but also highly flexible.
● Work from home will be the new norm, and digital conferences and
meetings would be the accepted practice. This would cause demand for commercial
real estate space to drop.
8. Banking and Finance
● Artificial Intelligence is the best way forward for banking because it brings the power of
advanced data analytics to combat fraudulent transactions and improve compliance.
● AI assistants, such as chatbots, provide personalized financial advice and use natural
language processing to supply instant, self-help customer service. With increasingly efficient
and intelligent systems, it becomes easier to detect and stop fraud in the banking sector.
● For example, artificially intelligent systems can help in high-frequency trading, enhanced
customer service, and better decision making due to faster processing of large amounts of
data.
AI Agent:
An agent is anything that perceives its environment through sensors and acts upon that
environment through actuators. An agent runs in a cycle of perceiving, thinking, and
acting.
o Human-Agent: A human agent has eyes, ears, and other organs which work for sensors and
hand, legs, vocal tract work for actuators.
o Robotic Agent: A robotic agent can have cameras, infrared range finder, NLP for sensors
and various motors for actuators.
o Software Agent: A software agent can receive keystrokes and file contents as sensory
input, act on those inputs, and display output on the screen.
An agent is anything that can perceive its environment through sensors and acts upon that
environment through effectors.
A human agent has sensory organs such as eyes, ears, nose, tongue and skin parallel to the
sensors, and other organs such as hands, legs, mouth, for effectors.
A robotic agent replaces cameras and infrared range finders for the sensors, and various
motors and actuators for effectors.
A software agent has encoded bit strings as its programs and actions.
Agent Terminology
Performance Measure of Agent − The criteria that determine how successful an agent is.
Behaviour of Agent − The action that the agent performs after any given sequence of percepts.
Percept − The agent's perceptual input at a given instant.
Percept Sequence − The history of everything the agent has perceived to date.
Agent Function − A map from the percept sequence to an action.
Rationality
Rationality is the state of being reasonable, sensible, and having good judgment.
Rationality is concerned with expected actions and results, depending on what the agent has
perceived. Performing actions with the aim of obtaining useful information is an important
part of rationality.
What is Ideal Rational Agent?
An ideal rational agent is one that is capable of taking the expected actions to maximize its
performance measure, on the basis of its percept sequence and its built-in knowledge base.
The rationality of an agent depends on the following −
 The performance measure, which determines the degree of success.
 The agent's percept sequence so far.
 The agent's prior knowledge about the environment.
 The actions that the agent can carry out.
A rational agent always performs the right action, where the right action is the one that
causes the agent to be most successful given the percept sequence. The problem the agent
solves is characterized by Performance measure, Environment, Actuators, and Sensors
(PEAS).
The Structure of Intelligent Agents
An agent's structure can be viewed as −
Agent = Architecture + Agent Program
 Architecture = the machinery that the agent executes on.
 Agent Program = an implementation of the agent function.
Simple Reflex Agents
They choose actions based only on the current percept.
 They are rational only if a correct decision can be made on the basis of the current percept alone.
 Their environment is completely observable.
Condition-Action Rule − a rule that maps a state (condition) to an action.
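
To make the condition-action idea concrete, here is a minimal Python sketch of a simple
reflex agent for the classic two-location vacuum world; the locations 'A'/'B', the percept
format, and the action names are illustrative assumptions, not taken from this text.

def simple_reflex_vacuum_agent(percept):
    # Map the current percept directly to an action via condition-action rules.
    location, status = percept           # e.g., ('A', 'Dirty')
    if status == 'Dirty':                # condition: current square is dirty
        return 'Suck'                    # action: clean it
    elif location == 'A':                # condition: at left square, clean
        return 'Right'                   # action: move right
    else:                                # condition: at right square, clean
        return 'Left'                    # action: move left

print(simple_reflex_vacuum_agent(('A', 'Dirty')))   # -> Suck
print(simple_reflex_vacuum_agent(('B', 'Clean')))   # -> Left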

Model Based Reflex Agents


They use a model of the world to choose their actions.
They maintain an internal state.
Model − knowledge about "how things happen in the world".
Internal State − a representation of the unobserved aspects of the current state, based on
the percept history.
Updating the state requires information about − how the world evolves, and
how the agent's actions affect the world.

Goal Based Agents


They choose their actions in order to achieve goals. The goal-based approach is more
flexible than the reflex agent approach, since the knowledge supporting a decision is
explicitly modelled, thereby allowing for modification.
Goal − a description of desirable situations.

Utility Based Agents


They choose actions based on a preference (utility) for each state. Goals are inadequate when −
 There are conflicting goals, of which only a few can be achieved.
 Goals have some uncertainty of being achieved, and you need to weigh the likelihood of
success against the importance of each goal.
The Nature of Environments: Some programs operate in an entirely artificial environment
confined to keyboard input, a database, computer file systems, and character output on a
screen. In contrast, some software agents (software robots or softbots) exist in rich,
unlimited softbot domains. The most famous artificial environment is the Turing Test
environment, in which one real and one artificial agent are tested on equal ground. The
success of the intelligent behaviour of a system can be measured with the Turing Test.

The environment has multifold properties –


Discrete / Continuous − If there is a limited number of distinct, clearly defined states of the
environment, the environment is discrete (for example, chess); otherwise it is continuous
(for example, driving).
Observable / Partially Observable − If it is possible to determine the complete state of the
environment at each time point from the percepts, it is observable; otherwise it is only
partially observable.

Applications of AI

Game playing, Speech Recognition, Natural Language processing, Computer Vision, Expert
Systems.

Game Playing:
Game playing is a popular application of artificial intelligence that involves the
development of computer programs to play games, such as chess, checkers, or Go. The goal
of game playing in artificial intelligence is to develop algorithms that can learn how to play
games and make decisions that will lead to winning outcomes.

Speech recognition:

Speech recognition is the process of identifying a human voice. Typically, businesses create
these programs and integrate them into various hardware devices to identify speech. When
the program hears your voice or receives your command, it responds appropriately.

Numerous businesses create speech-recognition software using cutting-edge technologies
like artificial intelligence, machine learning, and neural networks. Technologies like Siri,
Amazon Alexa, Google Assistant, and Cortana have changed the way individuals use
hardware and electronic devices, including smartphones, home security devices, cars, etc.

Natural Language Processing (NLP):


NLP focuses on enabling computers to understand and process human language. AI-
powered NLP techniques have paved the way for applications like language translation,
sentiment analysis, chatbots, and voice assistants. With AI algorithms, computers can now
extract meaning from text.

Computer Vision:
AI algorithms have transformed computer vision by enabling machines to analyze and
understand images and videos. Applications like object detection, facial recognition,
autonomous vehicles, and medical imaging heavily rely on AI in computer vision.

State space search:

1. Search - Searching solves a search problem in a given space step by step. Three major
factors define a search problem:

 Search Space - A search space is the collection of potential solutions a system may
have.
 Start State - The state from which the agent begins the search.
 Goal Test - A function that examines the current state and returns whether or not the
goal state has been reached.

2. Search tree - A search tree is a tree representation of a search problem. The node at the
root of the search tree corresponds to the initial state.
3. Actions - Describes all the steps, activities, or operations available to the agent.
4. Transition model - Describes what each action does.
5. Path Cost - A function that assigns a cost to each path.
6. Solution - An action sequence that connects the start node to the goal node.
7. Optimal Solution - A solution with the lowest cost among all solutions.

Defining the problem as State Space Search:


The state space representation forms the basis of most AI methods. To formulate a
problem as a state space search, specify the legal problem states, the legal operators, and
the initial and goal states. A state is defined by the specification of the values of all
attributes of interest in the world.
An operator changes one state into another; it has a precondition, which is the value of
certain attributes prior to the application of the operator, and a set of effects, which are the
attributes altered by the operator. The initial state is where you start. The goal state is a
partial description of the solution.

Formal Description of the Problem:


1. Define a state space that contains all the possible configurations of the relevant objects.
2. Specify one or more states within that space that describe possible situations from which
the problem-solving process may start (initial states).
3. Specify one or more states that would be acceptable as solutions to the problem (goal
states).
4. Specify a set of rules that describe the actions (operations) available.

State-Space Problem Formulation:
Example:
A problem is defined by four items:
1. initial state, e.g., "at Arad"
2. actions or successor function: S(x) = set of action–state pairs, e.g., S(Arad) = {…}
3. goal test (or set of goal states), e.g., x = "at Bucharest", Checkmate(x)
4. path cost (additive), e.g., sum of distances, number of actions executed, etc.; c(x, a, y) is
the step cost, assumed to be ≥ 0.
A solution is a sequence of actions leading from the initial state to a goal state.
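
As a minimal sketch, the four items above can be written in Python for the route-finding
example; the class name, the partial adjacency map, and the distances shown are
illustrative assumptions.

class RouteProblem:
    # Four-item problem definition: initial state, successor function,
    # goal test, and additive path cost.
    def __init__(self, initial, goal, graph):
        self.initial, self.goal, self.graph = initial, goal, graph

    def successors(self, state):
        # S(x): the set of (action, resulting state) pairs
        return [('Go(%s)' % city, city) for city, _ in self.graph.get(state, [])]

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, next_state):
        # c(x, a, y), assumed to be >= 0
        return dict(self.graph.get(state, []))[next_state]

romania = {'Arad': [('Zerind', 75), ('Sibiu', 140), ('Timisoara', 118)]}  # partial, illustrative
problem = RouteProblem('Arad', 'Bucharest', romania)
print(problem.successors('Arad'))   # -> [('Go(Zerind)', 'Zerind'), ...]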

Advantages of State Space Search:

1. It is very useful in AI because it provides the set of all possible states, operations, and
goals.
2. If the entire state space for a problem is given, then it is possible to trace the path from
the initial state to the goal state and identify the sequence of operations required for doing
so.

Disadvantages of State Space Search:

1. It is not possible to visualize all states for a problem.
2. The resources of the computer system are very limited and may not handle a huge
combinatorial state space.

Properties of Search Algorithms


The four important properties of search algorithms in artificial intelligence for comparing
their efficiency are as follows:

1. Completeness - A search algorithm is said to be complete if it guarantees a solution
for any input whenever at least one solution exists.
2. Optimality - A solution discovered by an algorithm is considered optimal if it is the
best solution (lowest path cost) among all solutions.
3. Time complexity - Measures how long the algorithm takes to complete its job.
4. Space complexity - The maximum storage space required during the search, determined
by the problem's complexity.

These properties are often used to contrast the effectiveness of various search algorithms in
artificial intelligence.

Problem: Eight puzzle

The eight puzzle problem is also known as the N-puzzle problem or the sliding puzzle
problem.
An N-puzzle consists of N tiles (N+1 positions, including an empty tile) where N can be 8,
15, 24, and so on.
In our example N = 8 (that is, square root of (8+1) = 3 rows and 3 columns).
In the same way, if we have N = 15 or 24, the board has square root of (N+1) rows and
square root of (N+1) columns: if N = 15, the number of rows and columns is 4, and if
N = 24, the number of rows and columns is 5.
So, basically, in these types of problems we are given an initial state or initial
configuration (start state) and a goal state or goal configuration.
Here we are solving the 8-puzzle problem, which is a 3x3 matrix.
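
A minimal Python sketch of the 8-puzzle state and its legal moves follows; the tuple
encoding (0 marks the blank) and the sample configuration are illustrative assumptions.

def moves(state):
    # Generate the states reachable by sliding a neighbouring tile into the blank.
    s = list(state)
    i = s.index(0)                     # position of the blank tile
    row, col = divmod(i, 3)            # 3x3 board for N = 8
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # up, down, left, right
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = r * 3 + c
            t = s[:]                   # copy, then swap blank with neighbour
            t[i], t[j] = t[j], t[i]
            yield tuple(t)

start = (1, 2, 3, 4, 0, 6, 7, 5, 8)    # illustrative start configuration
for next_state in moves(start):
    print(next_state)                  # the four states reachable in one move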


Problem: Man, Lion, Goat, Cabbage

-------------------------------------------------------------------------------------------------------------

Generate and Test:


Generate and Test Search is a heuristic search technique based on depth-first search with
backtracking, which guarantees finding a solution, if one exists, when done systematically.
1. Generate a possible solution. For example, generate a particular point in the problem
space or a path from the start state.
2. Test to see if this is an actual solution by comparing the chosen point, or the endpoint of
the chosen path, to the set of acceptable goal states.
3. If a solution is found, quit. Otherwise, go to step 1.
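
The three steps above can be sketched in Python as follows; the systematic generator here
uses depth-first generation of paths with backtracking, and the toy graph in the usage
lines is an illustrative assumption.

def generate_and_test(start, move_gen, goal_test):
    stack = [[start]]                       # candidate solutions are paths
    while stack:
        path = stack.pop()                  # 1. generate a possible solution
        if goal_test(path[-1]):             # 2. test its endpoint against the goals
            return path                     # 3. solution found: quit
        for nxt in move_gen(path[-1]):      # otherwise extend the path and
            if nxt not in path:             #    go back to step 1 (avoid cycles)
                stack.append(path + [nxt])
    return None                             # exhausted: no solution exists

graph = {'S': ['A', 'B'], 'A': ['G'], 'B': [], 'G': []}   # toy problem space
print(generate_and_test('S', lambda n: graph[n], lambda n: n == 'G'))
# -> ['S', 'A', 'G']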

The potential solutions that need to be generated vary depending on the kind of problem.
• For some problems the possible solutions may be particular points in the problem space,
and for other problems, paths from the start state.
Generate-and-test, like depth-first search, requires that complete solutions be generated for
testing.
• In its most systematic form, it is simply an exhaustive search of the problem space.
• Solutions can also be generated randomly, but then finding a solution is not guaranteed.
• This approach is known as the British Museum algorithm: finding an object in the
British Museum by wandering randomly.
While generating complete solutions and generating random solutions are the two extremes,
there exists another approach that lies in between.
• The search process proceeds systematically, but some paths that are unlikely to lead to the
solution are not considered. This evaluation is performed by a heuristic function.
Importance of Search Algorithms in Artificial Intelligence

The following points explain how and why the search algorithms in AI are important:

 Solving problems: Using logical search mechanisms, including problem description,


actions, and search space, search algorithms in artificial intelligence improve
problem-solving. Applications for route planning, like Google Maps, are one real-
world illustration of how search algorithms in AI are utilized to solve problems. These
programs employ search algorithms to determine the quickest or shortest path
between two locations.
 Search programming: Many AI activities can be coded in terms of searching, which
improves the formulation of a given problem's solution.
 Goal-based agents: The efficiency of goal-based agents is improved through search
algorithms in artificial intelligence. To solve a problem, these agents search for the most
optimal course of action that can offer the best resolution.
 Support production systems: Search algorithms in artificial intelligence help
production systems run. These systems help AI applications by using rules and
methods for putting them into practice. Production systems use search algorithms in
artificial intelligence to find the rules that can lead to the required action.
 Neural network systems: The neural network systems also use these algorithms.
These computing systems comprise a hidden layer, an input layer, an output layer, and
coupled nodes. Neural networks are used to execute many tasks in artificial
intelligence. For example, the search for connection weights that will result in the
required input-output mapping is improved by search algorithms in AI.

---------------------------------------------------------------------------------------------------------
Simple search
Simple search problem 1:

Algorithm:
SimpleSearch()
    open ← {start}
    while open is not empty
        do pick some node n from open
            open ← open \ {n}
            if GoalTest(n) = TRUE
                then return n
                else open ← open ∪ MoveGen(n)
    return FAILURE
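
A direct Python rendering of this pseudocode is given below; how "some node n" is picked
is left open in the pseudocode, so the arbitrary set.pop() used here is an assumption.

def simple_search(start, move_gen, goal_test):
    open_set = {start}                    # open <- {start}
    while open_set:                       # while open is not empty
        n = open_set.pop()                # pick some n; open <- open \ {n}
        if goal_test(n):                  # if GoalTest(n) = TRUE
            return n                      #     then return n
        open_set |= set(move_gen(n))      #     else open <- open U MoveGen(n)
    return 'FAILURE'
# Note: like the pseudocode, this keeps no CLOSED list, so on graphs with
# cycles it can revisit nodes and may not terminate.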
Simple Search problem 2:
Algorithm:

----------------------------------------------------------------------------------------------------------
Search:
Search is the systematic examination of states to find a path from the start/root state to the
goal state. Many traditional search algorithms are used in AI applications. For complex
problems, the traditional algorithms are unable to find a solution within practical time and
space limits.
Consequently, many special techniques have been developed that use heuristic functions.
The algorithms that use heuristic functions are called heuristic algorithms. Heuristic
algorithms are not really intelligent; they appear to be intelligent because they achieve better
performance. Heuristic algorithms are more efficient because they take advantage of
feedback from the data to direct the search path.

Types of Search Algorithms in AI

We can divide search algorithms in artificial intelligence into uninformed (Blind search) and
informed (Heuristic search) algorithms based on the search issues.

Uninformed/Blind Search

An uninformed search does not use any domain knowledge, such as closeness or the
location of the goal. It operates in a brute-force way, as it only includes information about
how to traverse the tree and how to identify leaf and goal nodes. Uninformed search
traverses the search tree without any information about the search space, such as the initial
state, operators, and goal test, so it is also called blind search. It examines each node of the
tree until it reaches the goal node.

It can be divided into six main types:


o Breadth-first search
o Uniform cost search
o Depth-first search
o Depth limited search
o Iterative deepening depth-first search
o Bidirectional Search

Role of Uninformed Search Algorithms in AI


1. Path finding: One of the most common applications of uninformed search algorithms is
in path-finding problems, such as finding the shortest path between two points on a map.
Algorithms like Breadth-First Search (BFS) and Depth-First Search (DFS) are used to
explore the map and find the optimal path.
2. Puzzle Solving: Uninformed search algorithms are also used in puzzle-solving problems,
such as the Eight Puzzle or the Fifteen Puzzle. These algorithms can be used to find a
sequence of moves that leads to the solution of the puzzle.
3. Game Playing: In game-playing AI, uninformed search algorithms can be used to explore
the game tree and find the optimal sequence of moves to win the game. Algorithms like
DFS and BFS are often used in conjunction with minimax or alpha-beta pruning to
improve efficiency.
4. Robot Navigation: Uninformed search algorithms can be used in robotics for path
planning and navigation tasks. Robots can use these algorithms to explore their
environment and find the best path to reach a target location while avoiding obstacles.
5. Web Crawling: Web crawlers use uninformed search algorithms to explore the web and
index web pages. BFS is commonly used in web crawling to systematically visit web
pages and follow links to discover new pages.

Informed Search

Informed search algorithms use domain knowledge. In an informed search, problem
information is available that can guide the search. Informed search strategies can find a
solution more efficiently than uninformed search strategies. Informed search is also called
heuristic search.

A heuristic is a technique that is not always guaranteed to find the best solution but is
guaranteed to find a good solution in reasonable time.

Informed search can solve complex problems that cannot be solved in other ways.

An example of a problem solved by informed search algorithms is the travelling salesman
problem.

1. Greedy Search
2. A* Search
Application of Informed Search Algorithms
Informed search algorithms are extensively used in various applications, such as:
1. Path finding in Navigation Systems: Used to calculate the shortest route from a point A
to point B on a map.
2. Game Playing: Helps in determining the best moves in games like chess or Go by
evaluating the most promising moves first.
3. Robotics: Utilized in autonomous navigation where a robot must find the best path
through an environment.
4. Problem Solving in AI: Applied to a range of problems from scheduling and planning
to resource allocation and logistics.

--------------------------------------------------------------------------------------------------------------
-----------------------
Depth first search (DFS)
o Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
o It is called depth-first search because it starts from the root node and follows each
path to its greatest depth node before moving to the next path.
o DFS uses a stack data structure for its implementation: last in, first out (LIFO).
o The process of the DFS algorithm is otherwise similar to the BFS algorithm.

Note: Backtracking is an algorithmic technique for finding all possible solutions using
recursion.
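
A minimal Python sketch of DFS with an explicit LIFO stack is shown below; the adjacency
list mirrors the example tree discussed later (S, A, B, C, D, E, G) and is otherwise an
illustrative assumption.

def dfs(graph, start, goal):
    stack = [[start]]                      # LIFO stack of paths
    visited = set()
    while stack:
        path = stack.pop()                 # expand the most recently pushed path
        node = path[-1]
        if node == goal:
            return path
        if node not in visited:
            visited.add(node)
            # push neighbours in reverse so the leftmost child is explored first
            for nxt in reversed(graph.get(node, [])):
                stack.append(path + [nxt])
    return None

graph = {'S': ['A', 'C'], 'A': ['B'], 'B': ['D', 'E'], 'C': ['G'],
         'D': [], 'E': [], 'G': []}
print(dfs(graph, 'S', 'G'))    # -> ['S', 'C', 'G']; A, B, D, E are tried first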

Advantage:

o DFS requires very little memory, as it only needs to store a stack of the nodes on the
path from the root node to the current node.
o It takes less time to reach the goal node than the BFS algorithm (if it traverses the
right path).
o It stores the route being tracked in memory, saving time, as it only needs to keep one
path at a time.

Disadvantage:

o There is the possibility that many states keep recurring, and there is no guarantee
of finding a solution.
o The DFS algorithm goes deep down in its search, and sometimes it may go into an
infinite loop.
o The depth-first search (DFS) algorithm does not always find the shortest path to a
solution.
Example:
In the search tree below, we show the flow of depth-first search; it follows the
order:

Root node ---> Left node ----> Right node.

It starts searching from root node S and traverses A, then B, then D and E. After traversing
E, it backtracks the tree, as E has no other successor and the goal node has not yet been
found. After backtracking, it traverses node C and then G, where it terminates, as it has
found the goal node.
Completeness: The DFS search algorithm is complete within a finite state space, as it will
expand every node within a finite search tree.

Time Complexity: The time complexity of DFS is equivalent to the number of nodes
traversed by the algorithm. It is given by:

T(n) = 1 + n^2 + n^3 + ... + n^m = O(n^m)

where m = the maximum depth of any node, which can be much larger than d (the
shallowest solution depth).

Space Complexity: The DFS algorithm needs to store only a single path from the root node;
hence the space complexity of DFS is equivalent to the size of the fringe set, which is O(bm).

Optimal: The DFS search algorithm is non-optimal, as it may generate a large number of
steps or a high cost to reach the goal node.

--------------------------------------------------------------------------------------------------------------

Breadth First search(BFS)


Breadth-First Search (BFS) begins at a chosen vertex and explores neighbouring vertices
iteratively, level by level. Here is a basic outline of the BFS algorithm:

1. Choose a starting vertex and mark it as visited.
2. Enqueue the starting vertex into a queue data structure.
3. While the queue is not empty:
   1. Dequeue a vertex from the front of the queue.
   2. Visit the dequeued vertex and explore its adjacent vertices.
   3. For each unvisited adjacent vertex, mark it as visited and enqueue it into
      the queue.
Repeat step 3 until all reachable vertices are visited or the queue becomes empty.
If all reachable vertices are visited, terminate the algorithm; otherwise, continue
exploring vertices by dequeuing from the queue.
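
These steps translate directly into Python; the adjacency list below mirrors the worked
example that follows (nodes 1-6) and is otherwise an illustrative assumption.

from collections import deque

def bfs(graph, start):
    visited = {start}                  # step 1: mark the start vertex visited
    queue = deque([start])             # step 2: enqueue the start vertex
    order = []
    while queue:                       # step 3: until the queue is empty
        node = queue.popleft()         # 3.1 dequeue the front vertex
        order.append(node)             # 3.2 visit it
        for nxt in graph.get(node, []):
            if nxt not in visited:     # 3.3 enqueue unvisited neighbours
                visited.add(nxt)
                queue.append(nxt)
    return order

graph = {1: [2, 3], 2: [6, 4], 3: [4, 5], 4: [], 5: [], 6: []}
print(bfs(graph, 1))    # -> [1, 2, 3, 6, 4, 5], as in the walkthrough below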

Working Principle of BFS


Let's see the working principle of BFS for the following graph.

1. Initialization:
 Start at node 1.
 Create a queue to manage the nodes to be explored and initialize it with the start
node: queue = [1].
 Mark node 1 as visited to avoid revisiting it: visited = {1}.

2. Exploration:
 Dequeue an element from the front of the queue. Here, we start with current_node = 1.
 Examine each adjacent node of node 1. According to the graph, these are nodes 2 and 3.

3. Checking and enqueuing the neighbours of node 1. For each adjacent node, if it has not
been visited, mark it as visited and enqueue it:
 Process node 2: mark as visited (visited = {1, 2}) and enqueue (queue = [2]).
 Process node 3: mark as visited (visited = {1, 2, 3}) and enqueue (queue = [2, 3]).

4. Continuing with node 2:
 Dequeue node 2: current_node = 2.
 Examine its neighbours (nodes 6 and 4).
 Node 6: mark as visited (visited = {1, 2, 3, 6}) and enqueue (queue = [3, 6]).
 Node 4: mark as visited (visited = {1, 2, 3, 6, 4}) and enqueue (queue = [3, 6, 4]).

5. Continuing with node 3:
 Dequeue node 3: current_node = 3.
 Examine its neighbours (nodes 4 and 5).
 Node 4 is already visited, so ignore it.
 Node 5: mark as visited (visited = {1, 2, 3, 6, 4, 5}) and enqueue (queue = [6, 4, 5]).

6. Finishing up:
 Continue the process until the queue is empty, dequeuing nodes 6, 4, and 5
sequentially, each time checking their neighbours. Since all neighbours will already have
been visited by the time these nodes are processed, no new nodes are added to the queue.
 Once the queue is empty, the BFS traversal is complete.

The final order of traversal in BFS starting from node 1 in this graph will be: 1, 2, 3, 6, 4, 5.
This order reflects how BFS explores all neighbors at the current depth before moving to
nodes at the next depth level.

Time Complexity of BFS


The time complexity of the BFS algorithm is O(V + E), where V is the number of nodes and
E is the number of edges.

Space Complexity of BFS


The space complexity of BFS is O(|V|), where |V| represents the number of vertices in the
graph.
The distance between the nodes in layer 1 is comparatively less than the distance between
the nodes in layer 2. Therefore, in BFS, you must traverse all the nodes in layer 1 before you
move to the nodes in layer 2.
Traversing child nodes
A graph can contain cycles, which may bring you to the same node again while traversing the
graph. To avoid processing the same node again, use a boolean array that marks a node
after it is processed. While visiting the nodes in a layer of the graph, store them in such a
manner that you can traverse the corresponding child nodes in the same order.
In the earlier diagram, start traversing from 0 and visit its child nodes 1, 2, and 3. Store them
in the order in which they are visited. This will allow you to visit the child nodes of 1 first
(i.e. 4 and 5), then of 2 (i.e. 6 and 7), then of 3 (i.e. 7), etc.
To make this process easy, use a queue to store the nodes and mark each node as 'visited'
until all its neighbours (vertices that are directly connected to it) are marked. The queue
follows the First In First Out (FIFO) queuing method; therefore, the neighbours of a node
are visited in the order in which they were inserted in the queue, i.e. the node that was
inserted first will be visited first, and so on.
Comparison of BFS and DFS
Depth Bounded DFS (DBDFS)
Depth-Limited Search Algorithm:
A depth-limited search algorithm is similar to depth-first search with a predetermined limit.
Depth-limited search can overcome the drawback of the infinite path in depth-first search.
In this algorithm, the node at the depth limit is treated as if it has no successor nodes.
Depth-limited search can terminate with two conditions of failure:
 Standard failure value: indicates that the problem does not have any solution.
 Cutoff failure value: indicates that there is no solution for the problem within the given
depth limit.
Advantages:
Depth-limited search is memory efficient.
Disadvantages:
 Depth-limited search also has the disadvantage of incompleteness.
 It may not be optimal if the problem has more than one solution.

Depth Limited Search Algorithm


We are given a graph G and a depth limit 'l'. Depth Limited Search is carried out in the
following way:

1. Set STATUS=1(ready) for each of the given nodes in graph G.

2. Push the Source node or the Starting node onto the stack and set its STATUS=2(waiting).

3. Repeat steps 4 to 5 until the stack is empty or the goal node has been reached.

4. Pop the top node T of the stack and set its STATUS=3(visited).

5. Push all the neighbours of node T that are in the ready state (STATUS=1) and at a depth
less than or equal to the depth limit 'l' onto the stack, and set their STATUS=2 (waiting).
(END OF LOOP)

6. END
A Depth Limited Search can be terminated when one of the following conditions is satisfied:
 We reach the target node.

 All of the nodes within the specified depth limit have been visited.
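
The stack-based steps above can be sketched in Python as follows; nodes deeper than the
limit are simply never pushed. The sample graph matches the step-by-step example below
(nodes A to J, l = 2) and is otherwise an assumption.

def depth_limited_search(graph, start, goal, limit):
    stack = [(start, 0)]               # each stack entry: (node, its depth)
    visited = set()
    traversal = []
    while stack:                       # repeat until the stack is empty
        node, depth = stack.pop()      # pop the top node T (mark visited)
        if node in visited:
            continue
        visited.add(node)
        traversal.append(node)
        if node == goal:               # terminate on reaching the target
            return traversal
        if depth < limit:              # push neighbours only within limit l
            for nxt in graph.get(node, []):
                if nxt not in visited:
                    stack.append((nxt, depth + 1))
    return traversal                   # limit exhausted without finding goal

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'],
         'D': ['G', 'H'], 'E': [], 'F': ['I', 'J']}
print(depth_limited_search(graph, 'A', 'H', 2))   # -> ['A', 'C', 'F', 'B', 'E', 'D']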
Step by Step Example

Consider the given graph with Depth Limit(l)=2, Target Node=H and the given source
node=A

Step 1 :

Now, the source node is pushed onto the stack.

Step 2 :

A being the top element is now popped from the stack and the neighbouring nodes B and C at
depth=1(<l) of A are pushed onto the stack.

Traversal: A
Step 3 :

C being the topmost element is popped from the stack and the neighbouring node F at
depth=2(=l) is pushed onto the stack.

Traversal: AC
Step 4 :

F being the topmost element is popped from the stack and the neighbouring nodes I and J at
depth=3(>l) will not be pushed onto the stack.

Traversal: ACF
Step 5 :

B being the topmost element is popped from the stack and the neighbouring nodes D and E at
depth=2(=l) are pushed onto the stack.
Traversal: ACFB
Step 6 :

E being the topmost element is popped from the stack and since E has no neighbouring
nodes, no nodes are pushed onto the stack.

Traversal: ACFBE
Step 7 :

D being the topmost element is popped from the stack and the neighbouring nodes G and H at
depth=3(>l) will not be pushed onto the stack.

Traversal: ACFBED
Since the stack is now empty, all nodes within the depth limit have been visited, but the
target node H has not been reached.

Depth First Iterative Deepening (DFID)


Iterative Deepening Depth-First Search:
The iterative deepening algorithm is a combination of the DFS and BFS algorithms. This
search algorithm finds the best depth limit by gradually increasing the limit until a goal is
found.
The algorithm performs depth-first search up to a certain "depth limit" and keeps
increasing the depth limit after each iteration until the goal node is found.
This search algorithm combines the benefits of breadth-first search's fast search and
depth-first search's memory efficiency.
The iterative deepening search algorithm is a useful uninformed search when the search
space is large and the depth of the goal node is unknown.
Advantages:
 It combines the benefits of the BFS and DFS algorithms in terms of fast search and
memory efficiency.
Disadvantages:
 The main drawback of IDDFS is that it repeats all the work of the previous phase.
Example:
The following tree structure shows the iterative deepening depth-first search. The IDDFS
algorithm performs iterations until it finds the goal node. The iterations performed by the
algorithm are as follows:
1st iteration -----> A
2nd iteration ----> A, B, C
3rd iteration ------> A, B, D, E, C, F, G
4th iteration ------> A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.
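
A minimal Python sketch of IDDFS follows: run a depth-limited DFS with limits 0, 1, 2, ...
until the goal appears, repeating the earlier levels each round as noted above. The sample
tree reproduces the iterations listed (goal K at depth 3) and is otherwise an assumption.

def dls(graph, node, goal, limit, order):
    order.append(node)                          # record the visit order
    if node == goal:
        return True
    if limit == 0:                              # depth limit reached: cut off
        return False
    return any(dls(graph, child, goal, limit - 1, order)
               for child in graph.get(node, []))

def iddfs(graph, start, goal, max_depth=10):
    for limit in range(max_depth + 1):          # gradually increase the limit
        order = []
        if dls(graph, start, goal, limit, order):
            return limit, order
    return None

tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
        'D': ['H', 'I'], 'F': ['K']}
print(iddfs(tree, 'A', 'K'))
# -> (3, ['A', 'B', 'D', 'H', 'I', 'E', 'C', 'F', 'K']), found in the 4th iteration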
Completeness:
This algorithm is complete if the branching factor is finite.
Time Complexity:
Suppose b is the branching factor and d is the depth of the goal; then the worst-case time
complexity is O(b^d).
Space Complexity:
The space complexity of IDDFS is O(bd).
Optimal:
The IDDFS algorithm is optimal if the path cost is a non-decreasing function of the depth
of the node.
-------------------------------------------------UNIT I ENDS---------------------------------------------
UNIT II
Heuristic Search: Heuristic Functions, Best First Search, Hill climbing, Local Maxima,
Solution Space search, variable neighbourhood descent, Beam Search, Tabu Search,
Peak to Peak methods.
Heuristic Search:
Heuristic search operates on the search space of a problem to find the best or
closest-to-optimal solution via systematic algorithms. In contrast to a brute-force approach,
which checks all possible solutions exhaustively, a heuristic search method uses heuristic
information to choose a route that seems more plausible than the rest. Heuristics, in this
case, refer to a set of criteria or rules of thumb that offer an estimate of how promising a
path or state is. Using heuristic guidance, the algorithms balance exploration and
exploitation, and thus they can successfully tackle demanding problems and enable an
efficient solution-finding process.
Significance of Heuristic Search in AI
The primary benefit of using heuristic search techniques in AI is their ability to handle
large search spaces. Heuristics help to prioritize which paths are most likely to lead to a
solution, significantly reducing the number of paths that must be explored. This not only
speeds up the search process but also makes it feasible to solve problems that are otherwise
too complex to handle with exact algorithms.

Components of Heuristic Search

Heuristic search algorithms typically comprise several essential components:


1. State Space: The set of all possible states or configurations among which the solution to
the given problem is sought.
2. Initial State: The state from which the search begins, representing the starting
configuration of the problem at hand.
3. Goal Test: A check that determines whether the present state is a goal state, in which the
problem is solved.
4. Successor Function: Generates the states that can replace the current state, representing
the possible moves in the problem space.
5. Heuristic Function: A heuristic estimates the cost or distance from a given state to the
target state. It helps to focus the search on regions or states that have a good prospect of
achieving the goal.

Heuristic functions are strategies or methods that guide the search process in AI
algorithms by providing estimates of the most promising path to a solution. They are often
used in scenarios where finding an exact solution is computationally infeasible. Instead,
heuristics provide a practical approach by narrowing down the search space, leading to
faster and more efficient problem-solving.
Heuristic functions transform complex problems into more manageable subproblems by
providing estimates that guide the search process. This approach is particularly effective in
AI planning, where the goal is to sequence actions that lead to a desired outcome.
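
For example, a common heuristic function for the 8-puzzle is the Manhattan distance, a
minimal sketch of which is given below; the goal layout chosen here is an assumption.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)    # 0 is the blank on a 3x3 board

def manhattan(state, goal=GOAL):
    # Estimated cost to goal: sum over tiles of row + column displacement.
    dist = 0
    for tile in range(1, 9):           # the blank tile is not counted
        i, j = state.index(tile), goal.index(tile)
        dist += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return dist

print(manhattan((1, 2, 3, 4, 5, 6, 0, 7, 8)))   # -> 2: tiles 7 and 8 are each one step away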
Applications of Heuristic Search
Heuristic search techniques find application in a wide range of problem-solving scenarios,
including:
1. Path finding: Finding the shortest path from a start point to a destination on a map or
graph.
2. Optimization: Distributing resources, planning, or scheduling optimally to achieve the
best results.
3. Game Playing: Guiding AI agents in board games, e.g., chess or Go, to make
strategy-based decisions.
4. Robotics: Planning a robot's location and movement to navigate carefully and perform
given tasks with high efficiency.
5. Natural Language Processing: Language-processing tasks that involve search algorithms,
such as parsing or semantic analysis.
Best First Search

Best First Search is a heuristic search algorithm that explores a graph by expanding the most
promising node first, according to a specified evaluation function. It continuously selects
nodes based on their estimated cost to reach the goal, typically using a priority queue. BFS is
suitable for finding paths in graphs and optimization problems where informed decision-
making is crucial.

Here's a concise explanation of the Best First Search algorithm in step-by-step form:

 Start with an initial node and add it to the open list.


 While the open list is not empty:
o Select the node with the lowest estimated cost (based on a heuristic function)
from the open list.
o If the selected node is the goal, return the solution.
o Otherwise, expand the selected node by generating its successors.
o Evaluate each successor node using the heuristic function and add them to the
open list.
 If the open list becomes empty without reaching the goal, the search fails.

Best First Search uses the concept of a priority queue and heuristic search. To search the
graph space, the method uses two lists for tracking the traversal: an 'OPEN' list that keeps
track of the current 'immediate' nodes available for traversal, and a 'CLOSED' list that
keeps track of the nodes already traversed.
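
A minimal Python sketch of greedy Best First Search with a priority queue (OPEN) and a
CLOSED set follows; the toy graph and heuristic values are illustrative assumptions.

import heapq

def best_first_search(graph, h, start, goal):
    open_list = [(h[start], start, [start])]      # OPEN: (h value, node, path)
    closed = set()                                # CLOSED: already traversed
    while open_list:
        _, node, path = heapq.heappop(open_list)  # lowest estimated cost first
        if node == goal:
            return path                           # goal reached: return solution
        if node in closed:
            continue
        closed.add(node)
        for nxt in graph.get(node, []):           # expand: generate successors
            if nxt not in closed:
                heapq.heappush(open_list, (h[nxt], nxt, path + [nxt]))
    return None                                   # OPEN empty: the search fails

graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
h = {'S': 5, 'A': 2, 'B': 4, 'G': 0}
print(best_first_search(graph, h, 'S', 'G'))      # -> ['S', 'A', 'G']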
Best First Search problem:

Hill climbing

Hill climbing in AI is employed to solve various optimization problems. It begins with an


initial solution and iteratively explores neighboring solutions by making incremental changes.
If a neighboring solution provides a better value of the objective function, it is accepted as the
current solution. This process continues until no further improvements can be made, or a
stopping criterion is met.
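
A minimal sketch of this loop on a one-dimensional objective function is shown below; the
function, starting point, and step size are illustrative assumptions.

def hill_climb(f, x, step=0.1, max_iters=1000):
    # Repeatedly move to a better neighbour; stop when no incremental
    # change improves the objective (a local or global maximum).
    for _ in range(max_iters):
        neighbours = [x - step, x + step]   # incremental changes
        best = max(neighbours, key=f)
        if f(best) <= f(x):                 # no improvement: stop
            break
        x = best                            # accept the better neighbour
    return x

f = lambda x: -(x - 3) ** 2 + 9             # single peak at x = 3
print(round(hill_climb(f, 0.0), 2))         # -> 3.0 (approximately)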

Advantages of Hill Climbing Algorithm

1. Simplicity: Hill climbing is straightforward to understand and implement, making it


an excellent choice for solving basic optimization problems.
2. Memory Efficiency: It doesn't require significant memory resources since it only
stores the current state and explores nearby solutions.
3. Quick Convergence: Hill climbing often converges rapidly to a local maximum,
making it suitable for problems where finding a good solution quickly is more critical
than finding the global optimum.

Disadvantages of Hill Climbing Algorithm

1. Local Optima: Hill climbing is prone to getting stuck in local optima, especially in
complex, multi-modal landscapes where multiple peaks exist.
2. Lack of Exploration: It can be myopic, as it only explores nearby solutions. It may
miss a globally optimal solution if it requires moving away from the current position
initially.
3. Sensitivity to Initial State: The quality of the solution found is highly dependent on
the initial state, making it sensitive to the starting point.

Types of Hill Climbing

Hill climbing comes in several flavors, each with its own approach to finding the optimal
solution:

1. Simple Hill Climbing

Simple Hill Climbing is the most basic form of the algorithm. It iteratively makes small
moves to neighboring solutions, accepting a new solution only if it improves the objective
function. If it doesn't, the algorithm terminates. This method is limited by its tendency to get
stuck in local optima.

While Simple Hill Climbing has advantages such as simplicity and memory efficiency, it also
has limitations. It can get stuck in local optima, miss global optima, and its performance is
sensitive to the initial solution. To address these limitations, more advanced variants and
enhancements of hill climbing algorithms have been developed, such as steepest-ascent hill
climbing, simulated annealing, and genetic algorithms, which offer improved exploration of
the solution space and better convergence properties.

2. Steepest-Ascent Hill Climbing

Steepest-Ascent Hill Climbing, also known as steepest ascent or gradient ascent, takes a more
rigorous approach. It explores all neighboring solutions and selects the one that offers the
most significant improvement in the objective function. This helps mitigate the problem of
getting stuck in local optima to some extent.

Steepest-Ascent Hill Climbing is called "steepest" because it rigorously explores all possible
moves before deciding on the best one. This approach helps mitigate the problem of getting
stuck in local optima, as it can escape from a local peak by considering all available options.

However, Steepest-Ascent Hill Climbing does come with a trade-off – it can be


computationally more demanding than the basic hill climbing algorithm since it needs to
evaluate the objective function for all neighboring solutions. Despite this, it's a valuable
variant when the goal is to improve the ability to find better solutions in optimization
problems.

3. Stochastic Hill Climbing

Stochastic Hill Climbing introduces an element of randomness. Instead of always selecting


the best neighboring solution, it probabilistically accepts solutions that are worse than the
current one. This randomness allows the algorithm to explore a broader solution space,
potentially escaping local optima.

Stochastic Hill Climbing's stochastic nature allows it to explore a wider range of solutions,
potentially escaping local optima that deterministic hill climbing algorithms might get
trapped in. It is particularly useful in situations where a more exploratory approach is desired,
and it's not necessary to find the absolute best solution but rather a satisfactory one.
However, the randomness in Stochastic Hill Climbing can also be a drawback because it may
lead the algorithm to accept inferior solutions more frequently, making it less efficient in
converging to optimal or near-optimal solutions. To mitigate this, variants of Stochastic Hill
Climbing often include strategies for controlling the degree of randomness, such as simulated
annealing.
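
A minimal sketch of the probabilistic acceptance idea is given below; the acceptance rule
(exponential in the loss, in the style of simulated annealing), the objective, step size, and
temperature are all illustrative assumptions.

import math
import random

def stochastic_hill_climb(f, x, step=1.0, iters=1000, temperature=1.0):
    # Occasionally accept worse neighbours, with a probability that shrinks
    # the worse the candidate is; better neighbours are always accepted.
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)   # random nearby move
        delta = f(candidate) - f(x)
        if delta > 0 or random.random() < math.exp(delta / temperature):
            x = candidate                             # accept (possibly worse)
    return x

f = lambda x: -(x - 3) ** 2 + 9
random.seed(0)
print(stochastic_hill_climb(f, 0.0))   # -> a value near the peak at x = 3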

State-space Diagram for Hill Climbing

In the context of the Hill Climbing algorithm, the state space diagram is a visual
representation of the landscape of possible solutions for an optimization problem. It helps us
understand the different regions within this landscape.

To visualize how the hill climbing algorithm works, we can represent it using a state-space
diagram. In this diagram:

 Each point represents a state or a potential solution.


 Arrows between points represent transitions between states.
 The height of each point indicates the value of the objective function.

The algorithm starts at an initial state and follows arrows to explore neighboring states in
search of the highest peak.

The X-axis represents the potential states or configurations that our algorithm can attain.
Meanwhile, the Y-axis represents the respective values of the objective function associated
with each state.

Different Regions in the State Space Diagram

Here's an in-depth explanation of the various regions in the state space diagram:

1. Local Optima: Local optima are points in the state space where the objective
function has a high value (in maximization problems) or a low value (in minimization
problems), and these points are surrounded by solutions with lower (in maximization)
or higher (in minimization) values of the objective function.

2. Plateaus: Plateaus are flat regions in the state space where the objective function
remains constant over a range of solutions. In other words, many neighboring
solutions have nearly identical objective function values.

3. Ridges: Ridges are narrow, elevated paths in the state space diagram that lead to
higher regions. These paths typically have a higher objective function value than the
surrounding solutions.

4. Global Optimum: The global optimum is the highest point (in maximization
problems) or the lowest point (in minimization problems) in the state space,
representing the best possible solution for the optimization problem.

Problems in Different Regions in Hill Climbing Algorithm

 The most common issue in Hill Climbing is getting stuck in local optima. In a local
optimum, the algorithm perceives no direction of improvement, as all neighbouring
solutions have worse objective function values. It terminates, assuming it has found
the best solution.
 Plateaus are flat regions in the solution space where the objective function remains
constant over a range of solutions. Hill Climbing struggles on plateaus because it can't
discern the direction of improvement.
 Ridges are elevated paths in the solution space that may not lead directly to a peak but
offer better objective function values than the surrounding solutions. Hill Climbing
can oscillate along ridges without significant progress.

Applications of Hill Climbing Algorithm

The hill climbing algorithm finds applications in various domains within artificial
intelligence and optimization:

1. Machine Learning: Hill climbing can be used for hyperparameter tuning in machine
learning algorithms, finding the best combination of hyperparameters for a model.
2. Robotics: In robotics, hill climbing can help robots navigate through physical
environments, adjusting their paths to reach a destination.
3. Network Design: It can be used to optimize network topologies and configurations in
telecommunications and computer networks.
4. Game Playing: In game playing AI, hill climbing can be employed to develop
strategies that maximize game scores.
5. Natural Language Processing: It can optimize algorithms for tasks like text
summarization, machine translation, and speech recognition.
Solution Space Search

The SAT (Satisfiability) problem is a well-known problem in Artificial Intelligence and theoretical computer science, particularly in the context of search problems within solution spaces. It is a decision problem that asks whether there exists an interpretation that satisfies a given Boolean formula. The SAT problem is central in propositional logic and is the first known NP-complete problem, meaning that if we can solve SAT efficiently, we can solve all problems in NP efficiently.

SAT Problem Overview:

In the SAT problem, the objective is to determine whether there is an assignment of truth
values (true or false) to variables that make the Boolean formula true.

Solution Space Search for SAT Problem

In AI, finding a solution to the SAT problem is often framed as a search in the solution space. The solution space is made up of all possible assignments of truth values to the Boolean variables. For a problem with n variables, the solution space has 2^n possible assignments.

1. Search Techniques for SAT

 Brute Force Search: This involves checking all possible combinations of truth
assignments for the variables. This approach becomes impractical for large instances,
as the number of possible combinations grows exponentially.
 Backtracking Search: A more efficient method, backtracking incrementally builds
solutions, checking constraints at each step. If an assignment of a variable violates a
clause in the formula, the algorithm "backtracks" to the previous step and tries a
different value.
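As a small illustration of the brute force idea, the sketch below checks satisfiability by trying every assignment; the DIMACS-style clause encoding (a positive integer k for variable k, a negative one for its negation) is an assumption of this sketch.

from itertools import product

def brute_force_sat(clauses, n_vars):
    # Try all 2^n truth assignments; each clause is a list of non-zero
    # integers, k meaning "variable k is true", -k meaning "not k".
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits          # a satisfying assignment
    return None                  # formula is unsatisfiable

# (x1 OR NOT x2) AND (x2 OR x3)
print(brute_force_sat([[1, -2], [2, 3]], 3))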
Variable Neighbourhood Descent
Variable Neighbourhood Descent (VND) is a local search algorithm used in optimization
and artificial intelligence (AI) that systematically changes the neighborhood structure during
the search process to avoid getting trapped in local optima. It is a deterministic variant of
Variable Neighbourhood Search (VNS), which explores different neighborhoods to
enhance the search for a global optimum in a given solution space.

How VND Works:

VND starts with an initial solution and then applies local search within different
neighborhoods. If an improvement is found in a neighborhood, it resets the search to the first
neighborhood and continues. If no improvement is found, it switches to the next
neighborhood. The process continues until no better solution can be found across all
neighborhoods.
Algorithm of Variable Neighbourhood Descent
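The algorithm is easiest to follow in code; below is a minimal Python sketch of VND, where neighbourhoods (a list of functions, each returning the neighbours of a solution under one neighbourhood structure) and value are assumptions of this sketch rather than a definitive implementation.

def vnd(initial, neighbourhoods, value):
    # Variable Neighbourhood Descent: search neighbourhoods in order,
    # restarting from the first whenever an improvement is found.
    best = initial
    k = 0
    while k < len(neighbourhoods):
        candidates = list(neighbourhoods[k](best))
        improvement = max(candidates, key=value, default=best)
        if value(improvement) > value(best):
            best = improvement
            k = 0          # improvement: reset to the first neighbourhood
        else:
            k += 1         # no improvement: switch to the next neighbourhood
    return best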

Beam Search

Beam search is a heuristic search algorithm that explores a graph by expanding the most
optimistic node in a limited set. Beam search is an optimization of best-first search that
reduces its memory requirements.

Beam search uses breadth-first search to build its search tree. At each level of the tree, it
generates all successors of the states at the current level, sorting them in increasing order of
heuristic cost. However, it only stores a predetermined number (β) of best states at each
level, called the beam width. Only those states are expanded next.

For example, let's take the value of β = 2 for the tree shown below, and follow the steps to find the goal node.

How Beam Search Works:

Beam search operates by maintaining a fixed number of best nodes (partial solutions) at each
level of the search tree. It expands only the most promising candidates at each step and
discards the rest, making it more efficient in practice than BFS or exhaustive search.
Characteristics of Beam Search

 Width of the Beam (W): This parameter defines the number of nodes considered at each
level. The beam width W directly influences the number of nodes evaluated and hence
the breadth of the search.
 Branching Factor (B): If B is the branching factor, the algorithm evaluates W×B nodes
at every depth but selects only W for further expansion.
 Completeness and Optimality: The restrictive nature of beam search, due to a limited
beam width, can compromise its ability to find the best solution as it may prune
potentially optimal paths.
 Memory Efficiency: The beam width bounds the memory required for the search,
making beam search suitable for resource-constrained environments.

Steps of Beam Search

The process of Beam Search can be broken down into several steps:
1. Initialization: Start with the root node and generate its successors.
2. Node Expansion: From the current nodes, generate successors and apply the heuristic
function to evaluate them.
3. Selection: Select the top W nodes according to the heuristic values. These selected nodes
form the next level to explore.
4. Iteration: Repeat the process of expansion and selection for the new level of nodes until
the goal is reached or a certain condition is met (like a maximum number of levels).
5. Termination: The search stops when the goal is found or when no more nodes are
available to expand.
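These steps translate into a short sketch; successors, heuristic, and is_goal are hypothetical problem-specific callables, and beam_width is the parameter W (or β) described above.

def beam_search(start, successors, heuristic, is_goal, beam_width):
    # Keep only the beam_width best nodes at each level of the tree.
    if is_goal(start):
        return start
    level = [start]
    while level:
        children = [c for node in level for c in successors(node)]
        for c in children:
            if is_goal(c):
                return c
        children.sort(key=heuristic)     # increasing heuristic cost
        level = children[:beam_width]    # prune everything beyond the beam
    return None                          # beam emptied before reaching the goal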

Tabu Search
Tabu search augments the exploitative strategy of heuristic search with an explorative tendency that looks for new areas in the search space.
The search follows the dictates of the heuristic function as long as better choices exist; when there are no better choices, instead of terminating the local search, it continues exploring.
Tabu search modifies the termination criteria: the algorithm does not terminate on reaching a maximum but continues searching beyond it until some other criterion is met.
One way of getting the most out of such a search is to keep track of the best solution found so far. TS remains guided by the heuristic function even when it goes beyond a local maximum. One way to push the search away from a maximum is to keep a finite closed list in which the most recent nodes are stored; such a closed list can be implemented as a circular queue of k elements where the last k nodes are stored.
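A minimal sketch of this scheme, with the tabu list implemented as a circular queue of the k most recent nodes; neighbours and value are assumed problem-specific callables.

from collections import deque

def tabu_search(start, neighbours, value, k=10, max_steps=500):
    # Always move to the best non-tabu neighbour, even if it is worse,
    # and remember the last k nodes so they cannot be revisited at once.
    current = best = start
    tabu = deque([start], maxlen=k)      # circular queue of k elements
    for _ in range(max_steps):
        allowed = [n for n in neighbours(current) if n not in tabu]
        if not allowed:
            break
        current = max(allowed, key=value)   # follow the heuristic function
        tabu.append(current)
        if value(current) > value(best):
            best = current                  # keep track of the best found
    return best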
UNIT III

Iterated Hill Climbing

Iterated Hill Climbing is an extension of the basic hill climbing algorithm, a local search
method used in optimization problems and artificial intelligence (AI). Unlike basic hill
climbing, which can easily get stuck in local optima, iterated hill climbing repeatedly applies
the hill climbing process from different starting points to find better solutions and overcome
the limitations of local search.

To overcome the problem of getting stuck in local optima, iterated hill climbing restarts the
hill climbing process from different initial random states. The idea is that each restart gives
the algorithm a new chance to explore a different part of the solution space, increasing the
chances of finding the global optimum.

Steps:

1. Initial Random Solution: Choose an initial random solution from the solution space.
2. Local Search (Hill Climbing): Perform hill climbing on this solution until you reach
a local optimum.
3. Restart: Once you reach a local optimum, generate a new random initial solution and
restart the hill climbing process.
4. Best Solution Tracking: Keep track of the best solution found during each hill
climbing iteration.
5. Repeat: Repeat the process a predefined number of times or until a time/iteration
limit is reached.
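A minimal sketch of these steps, assuming problem-specific random_solution, neighbours, and value functions supplied by the caller:

def iterated_hill_climbing(random_solution, neighbours, value, restarts=20):
    best = None
    for _ in range(restarts):
        current = random_solution()          # step 1: random initial solution
        while True:                          # step 2: plain hill climbing
            nbrs = list(neighbours(current))
            if not nbrs:
                break
            nxt = max(nbrs, key=value)
            if value(nxt) <= value(current):
                break                        # local optimum reached
            current = nxt
        if best is None or value(current) > value(best):
            best = current                   # step 4: track the best solution
    return best                              # steps 3 and 5: the restart loop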
Simulated Annealing
Simulated annealing is a stochastic global search optimization algorithm and a modified version of stochastic hill climbing.
The algorithm is appropriate for nonlinear objective functions where other local search algorithms do not operate well.
The simulated annealing strategy is to start at a high temperature and then gradually reduce it to lower temperatures.
Simulated annealing is very useful in situations where there are many local minima.
It improves on stochastic hill climbing through the introduction of two tricks:
The algorithm picks a random move instead of picking the best move.
If the move improves the result, it accepts the move; otherwise it accepts the move with some probability less than 1.
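These two tricks appear directly in the following sketch, where a worse move is accepted with probability exp(delta / T) and the temperature T is gradually reduced; the neighbour and value functions and the cooling parameters are assumptions of this sketch.

import math, random

def simulated_annealing(start, neighbour, value, t0=1.0, cooling=0.995, t_min=1e-3):
    current, t = start, t0
    while t > t_min:
        candidate = neighbour(current)            # a random move, not the best one
        delta = value(candidate) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate                   # good moves always accepted,
                                                  # bad moves with probability < 1
        t *= cooling                              # gradually lower the temperature
    return current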

Genetic Algorithm

Genetic Algorithms(GAs) are adaptive heuristic search algorithms that belong to the larger
part of evolutionary algorithms. Genetic algorithms are based on the ideas of natural
selection and genetics. These are intelligent exploitation of random searches provided with
historical data to direct the search into the region of better performance in solution
space. They are commonly used to generate high-quality solutions for optimization
problems and search problems.

Genetic algorithms simulate the process of natural selection which means those species
that can adapt to changes in their environment can survive and reproduce and go to the next
generation. In simple words, they simulate “survival of the fittest” among individuals of
consecutive generations to solve a problem. Each generation consists of a population of
individuals and each individual represents a point in search space and possible solution.
Each individual is represented as a string of character/integer/float/bits. This string is
analogous to the Chromosome.

Search space
The populations of individuals are maintained within search space. Each individual
represents a solution in search space for given problem. Each individual is coded as a finite
length vector (analogous to chromosome) of components. These variable components are
analogous to Genes. Thus a chromosome (individual) is composed of several genes
(variable components).

Fitness Score
A Fitness Score is given to each individual which shows the ability of an individual to
"compete". Individuals having optimal (or near-optimal) fitness scores are sought.

The GA maintains a population of n individuals (chromosomes/solutions) along with their fitness scores. The individuals having better fitness scores are given more chances to reproduce than others: individuals with better fitness scores are selected to mate and produce better offspring by combining the chromosomes of the parents. The population size is static, so room has to be created for new arrivals. Some individuals therefore die and get replaced by new arrivals, eventually creating a new generation when all the mating opportunities of the old population are exhausted. It is hoped that over successive generations better solutions will arrive while the least fit die.

Each new generation has, on average, more "better genes" than the individuals (solutions) of previous generations, and thus each new generation has better "partial solutions" than previous generations. Once the offspring produced show no significant difference from the offspring produced by previous populations, the population is converged, and the algorithm is said to have converged to a set of solutions for the problem.

Operators of Genetic Algorithms

Once the initial generation is created, the algorithm evolves the generation using following
operators –
1) Selection Operator: The idea is to give preference to the individuals with good fitness
scores and allow them to pass their genes to successive generations.

2) Crossover Operator: This represents mating between individuals. Two individuals are
selected using selection operator and crossover sites are chosen randomly. Then the genes
at these crossover sites are exchanged thus creating a completely new individual
(offspring).


3) Mutation Operator: The key idea is to insert random genes in offspring to maintain the
diversity in the population and avoid premature convergence.

The algorithm can be summarized as follows:

1) Randomly initialize population p
2) Determine fitness of population
3) Until convergence, repeat:
a) Select parents from population
b) Crossover and generate new population
c) Perform mutation on new population
d) Calculate fitness for new population
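A minimal sketch of this loop over bit-string chromosomes, using fitness-proportionate selection, one-point crossover, and bit-flip mutation; the OneMax fitness used in the example is just an illustration, and fitness values are assumed non-negative.

import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=100, p_mut=0.01):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        weights = [fitness(ind) for ind in pop]                  # fitness scores
        new_pop = []
        for _ in range(pop_size):
            p1, p2 = random.choices(pop, weights=weights, k=2)   # selection
            cut = random.randrange(1, length)                    # crossover site
            child = p1[:cut] + p2[cut:]                          # crossover
            child = [b ^ 1 if random.random() < p_mut else b     # mutation
                     for b in child]
            new_pop.append(child)
        pop = new_pop                                            # next generation
    return max(pop, key=fitness)

# Example: maximise the number of 1-bits ("OneMax")
print(genetic_algorithm(fitness=sum))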
Ant Colony Optimization:
Ant Colony Optimization technique is purely inspired from the foraging behaviour of ant
colonies, first introduced by Marco Dorigo in the 1990s. Ants are eusocial insects that
prefer community survival and sustaining rather than as individual species. They
communicate with each other using sound, touch and pheromone. Pheromones are
organic chemical compounds secreted by the ants that trigger a social response in
members of same species. These are chemicals capable of acting like hormones outside
the body of the secreting individual, to impact the behaviour of the receiving individuals.
Since most ants live on the ground, they use the soil surface to leave pheromone trails that
may be followed (smelled) by other ants.
procedure ACO_MetaHeuristic is
    while not terminated do
        generateSolutions()
        daemonActions()
        pheromoneUpdate()
    repeat
end procedure
Neural Networks
Neural networks are a type of machine learning model inspired by the human brain's
structure and function. They consist of layers of interconnected nodes (neurons) that process
and transform input data to produce an output. Here's a brief overview:
Structure of Neural Networks
Neurons
Input Neurons: Receive input data.
Hidden Neurons: Intermediate layers that transform data.
Output Neurons: Produce the final output.
Layers
Input Layer: Contains input neurons that receive data.
Hidden Layers: One or more layers where data transformation occurs.
Output Layer: Contains output neurons that produce the result.
Connections
Weights: Values that adjust the strength of the connections between neurons.
Biases: Additional parameters in each neuron that help the model make more accurate
predictions.
Types of Neural Networks
Feedforward Neural Networks (FNN)
Data moves in one direction, from input to output.
Used for tasks like image and speech recognition.
Convolutional Neural Networks (CNN)
Specialized for processing grid-like data such as images.
Use convolutional layers to detect features.


Recurrent Neural Networks (RNN)


Designed for sequential data like time series or natural language.
Neurons can retain information from previous steps (memory).
Long Short-Term Memory Networks (LSTM)
A type of RNN that can capture long-term dependencies.
Overcomes the vanishing gradient problem in traditional RNNs.
Training Neural Networks
Forward Propagation
Data passes through the network from input to output.
Predictions are made based on current weights and biases.
Loss Function
Measures the difference between the predicted output and the actual output.
Common loss functions include Mean Squared Error (MSE) and Cross-Entropy Loss.
Backpropagation
Algorithm used to minimize the loss function.
Adjusts weights and biases based on the error gradient.
Optimization
Gradient Descent: Algorithm to minimize the loss function.
Learning Rate: Step size for each iteration of gradient descent.
Momentum: Helps accelerate gradient descent by considering previous gradients.
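The pieces above fit together in a tiny, self-contained example: a one-hidden-layer network trained on XOR with forward propagation, mean squared error, backpropagation, and plain gradient descent. The layer sizes, random seed, and learning rate are arbitrary choices made for this sketch.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)       # weights and biases
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 1.0                                            # learning rate (step size)

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                        # forward propagation
    out = sigmoid(h @ W2 + b2)
    grad_out = (out - y) * out * (1 - out)          # backpropagation of MSE loss
    grad_h = grad_out @ W2.T * h * (1 - h)
    W2 -= lr * (h.T @ grad_out); b2 -= lr * grad_out.sum(0)   # gradient descent
    W1 -= lr * (X.T @ grad_h);  b1 -= lr * grad_h.sum(0)

print(np.round(out.ravel(), 2))                     # should approach [0, 1, 1, 0]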
Applications
Image Recognition: Identifying objects, faces, and scenes.
Natural Language Processing (NLP): Language translation, sentiment analysis,
chatbots.
Time Series Prediction: Stock market analysis, weather forecasting.
Recommender Systems: Suggesting products or content based on user behavior.
Neural networks are powerful tools for a variety of tasks, but they also require significant
computational resources and careful tuning of parameters to perform well.
UNIT IV
Brute Force
A brute force approach finds all the possible solutions in order to identify a satisfactory one for a given problem. The brute force algorithm tries out all the possibilities until a satisfactory solution is found.

Such an algorithm can be of two types:

o Optimizing: In this case, the best solution is found. The algorithm may either
enumerate all possible solutions to find the best one or, if the value of the best
solution is known, stop as soon as that best solution is found. For example:
finding the best path for the travelling salesman problem, where the best path
visits all the cities at minimum travelling cost.
o Satisficing: It stops searching as soon as a satisfactory solution is found.
For example, finding a travelling salesman path which is within 10% of optimal.
o Brute force algorithms often require exponential time. Various heuristics and
optimizations can be used:
o Heuristic: A rule of thumb that helps you decide which possibilities to
look at first.
o Optimization: Certain possibilities are eliminated without exploring all of them.
Let's understand the brute force search through an example.

Suppose we have converted the problem in the form of the tree shown as below:

Brute force search considers each and every state of a tree, and each state is represented in the form of a node. As far as the starting position is concerned, we have two choices, i.e., the A state and the B state. We can either generate state A or state B. In the case of the B state, we have two further states, i.e., states E and F.
In the case of brute force search, each state is considered one by one. As we can observe in the above tree, brute force search takes 12 steps to find the solution.

On the other hand, backtracking, which uses depth-first search, considers a state only if it can lead to a feasible solution. Consider the above tree: start from the root node, then move to node A and then node C. If node C does not provide a feasible solution, then there is no point in considering states G and H. We backtrack from node C to node A. Then we move from node A to node D. Since node D does not provide a feasible solution, we discard this state and backtrack from node D to node A.

We move to node B, then from node B to node E, and from node E to node K. Since K is a solution, it takes 10 steps to find it. In this way, we eliminate a greater number of states in a single iteration. Therefore, we can say that backtracking is faster and more efficient than the brute force approach.
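A generic sketch of this pruning behaviour; children, is_solution, and is_feasible are hypothetical callables describing a tree like the one in the example.

def backtrack(node, children, is_solution, is_feasible):
    # Depth-first search that abandons a state, and its whole subtree,
    # as soon as it cannot lead to a feasible solution.
    if is_solution(node):
        return node
    for child in children(node):
        if not is_feasible(child):       # prune this subtree entirely
            continue
        found = backtrack(child, children, is_solution, is_feasible)
        if found is not None:
            return found
    return None                          # backtrack to the parent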

Branch and Bound


Branch and bound is one of the techniques used for problem solving. It is similar to backtracking since it also uses the state space tree. It is used for solving optimization problems, typically minimization problems. If we are given a maximization problem, we can still apply the branch and bound technique by simply converting it into a minimization problem (for example, by negating the objective).
Refinement Search
The branch and bound technique applies to solution search as well. When it is applied to solution search, it is called refinement search. Let us see how it works. The method is the same as we have seen in the previous segment, but now we are applying it to complete solutions.

Suppose we try to apply the branch and bound technique to the TSP. Let us assume that we have seven cities, as shown in module no. 8, with different paths available between them as in module 8.

One typical way to use refinement search for TSP is to pick a city as the root node and have all other cities connected as children. Thus each child represents a city other than the root. The first level of the tree depicts the first move from the starting city to all other cities.
From those cities, we have the second level of the tree; these nodes have similar children indicating cities other than the parent and itself. This second level describes the second leg of the tour. We can complete the entire tree like this. The tree is as deep as the tours are long, and thus the maximum number of levels a tree can have is the length of the tour. In other words, the tree describes each possible tour beginning from the city described as the root node. This tree, in a way, describes a solution space for all solutions beginning from the root. We now need to search in this solution space and get the cheapest path.

For solving the complete TSP problem, we need as many trees as the number of cities. For example, if we have seven cities A, B, C, D, E, F, G, then we will have seven trees: one tree with root node A, another with root node B, and so on. Each tree contains the other cities as nodes at the second level. Each branch starting from the root node indicates a tour. The edges of the tree are weighted, and the number indicates the distance between the cities. These seven trees describe the complete solution space for the TSP we have described. Let us see how we search through it to find the cheapest solution. We will show how to get the cheapest path for one tree; we can do the same for all other trees, then compare all the solutions obtained so far and select the cheapest from the lot.

Look at figure 12.2 and table 12.1. The distances are depicted as weights of the arcs in the graph in figure 12.2. Suppose now we take one of the cities as the root and draw a tree with every other city as a child. If we decide the root to be A, figure 12.3 showcases how a tree indicating partial tours can be drawn. We can easily see that even in this trivial case with seven cities, the number of combinations is overwhelming.

At any given point of time, the tour cost so far can be calculated. For example, according to figure 12.3, the cost so far of the partial tour A-B-C-D is 220+250+150 = 620. If we want to estimate the total distance of the tour A-B-C-D-E-F-G, the cost of the remaining part may be estimated using some assumption. One method is to find the two shortest distances in the row of a city yet to be visited and divide their sum by two. For example, if we pick city E, two of the shortest distances are 150 and 250, which sum to 400; dividing by two yields 200. By picking the two shortest values, we are assuming that the city is connected with others using nearest neighbours, something we have already explored in module 8. Based on these estimated values, every tour can be measured as the summation of two components: one which is already calculated, and another which is estimated. We can pick the shortest tour and then revise the estimate by replacing the estimate of the next level with the correct value. We may need to refine the estimates of each tour connected with that edge. For example, if we now have a revised distance for D-E, we may need to change the estimates of both tours A-B-C-D-E-F-G and A-B-C-D-E-G-F. Similarly, if B-D currently sounds least costly and our estimate is (200+150)/2 = 175, but we then get the actual value of 150, we will have to revise the estimates of all tours which start with A-B-D. This continues until we get a complete tour whose lowest value is an actual value.
One may argue that we could calculate correct values for all tours rather than taking estimates. In our case of 7 cities it is actually possible, and we would not need branch and bound. Unfortunately, this is not so for a real case with a larger number of cities: we cannot have all values calculated beforehand, so we need to proceed by exploring children in this fashion.

Also, since we are exploring the cases where the origin of the tour is A, what we will get at the end is the shortest tour starting from A. We must take all possible cities as root nodes and apply refinement search to them to find the shortest tours starting from the other cities, such as tours starting from B, tours starting from C, and so on. At the end, we pick the best tour among the shortest tours found originating from all the cities.

Branch and bound helps us pick shorter paths and avoid longer ones, without using any heuristic. This search is basically a blind search, but a little better than breadth first search, as we estimate and try to go in the right direction rather than looking at all options.
Can branch and bound be used wherever one can use breadth first search? It is not advisable in all cases. Consider figure 12.1. We want to find the shortest path between two cities, say A and E. If we use refinement search, the algorithm tends to travel to cities which are nearer to the originating city, i.e. A. Assume F is nearest to A: the search travels in that direction; next it may go from F to D, which is shortest, and so on. Eventually the search algorithm realizes that this is a bad path and takes the next nearest neighbour, but it wastes a lot of time if the cities are situated such that there are many cities near the originating city while the destination city is quite far from all of them. The refinement search tries to look for the shortest path without knowing anything about the direction of the destination city. That is the problem. A breadth first search would probably yield the answer faster.

Dijkstra's Algorithm
Dijkstra's algorithm is a popular algorithm for solving single-source shortest path problems on graphs with non-negative edge weights, i.e., it finds the shortest distance between two vertices on a graph. It was conceived by Dutch computer scientist Edsger W. Dijkstra in 1956.
Algorithm for Dijkstra's Algorithm:
Mark the source node with a current distance of 0 and the rest with infinity.
Set the non-visited node with the smallest current distance as the current node.
For each neighbour N of the current node, add the current distance of the current node to the weight of the edge connecting them. If this is smaller than the current distance of N, set it as the new current distance of N.
Mark the current node as visited.
Go to step 2 if any nodes are unvisited.
Dijkstra's Algorithm will generate the shortest path from Node 0 to all other nodes in the graph.
Consider the below graph:

Initially we have the following setup:

• The distance from the source node to itself is 0. In this example the source node is 0.
• The distance from the source node to all other nodes is unknown, so we mark all of them as infinity.
Example: 0 -> 0, 1 -> ∞, 2 -> ∞, 3 -> ∞, 4 -> ∞, 5 -> ∞, 6 -> ∞.
• We'll also have an array of unvisited elements that keeps track of unvisited or unmarked nodes.
• The algorithm completes when all the nodes are marked as visited and the distance between them is added to the path. Unvisited Nodes: 0 1 2 3 4 5 6.
Step 1: Start from Node 0 and mark it as visited (in the image below, the visited node is marked red).
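The steps above can be sketched compactly with a priority queue; the adjacency-list encoding (graph[node] is a list of (neighbour, weight) pairs) and the small example graph are assumptions of this sketch.

import heapq

def dijkstra(graph, source):
    dist = {source: 0}
    heap = [(0, source)]                  # (distance, node) priority queue
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)     # unvisited node with the smallest
        if node in visited:               # current distance
            continue
        visited.add(node)
        for nbr, w in graph.get(node, []):
            if d + w < dist.get(nbr, float("inf")):
                dist[nbr] = d + w         # relax the edge
                heapq.heappush(heap, (d + w, nbr))
    return dist

g = {0: [(1, 4), (2, 1)], 2: [(1, 2)], 1: []}
print(dijkstra(g, 0))                     # {0: 0, 1: 3, 2: 1}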
Algorithm A*
A* (pronounced "A-star") is a powerful graph traversal and pathfinding algorithm widely
used in artificial intelligence and computer science. It is mainly used to find the shortest path
between two nodes in a graph, given the estimated cost of getting from the current node to the
destination node. The main advantage of the algorithm is its ability to provide an optimal path
by exploring the graph in a more informed way compared to traditional search algorithms
such as Dijkstra's algorithm.

Algorithm A* combines the advantages of two other search algorithms: Dijkstra's algorithm
and Greedy Best-First Search. Like Dijkstra's algorithm, A* ensures that the path found is as
short as possible but does so more efficiently by directing its search through a heuristic
similar to Greedy Best-First Search. A heuristic function, denoted h(n), estimates the cost of
getting from any given node n to the destination node.

The main idea of A* is to evaluate each node based on two parameters:

1. g(n): the actual cost to get from the initial node to node n. It represents the sum of the
edge costs along the path from the initial node to n.
2. h(n): the heuristic cost (also known as "estimation cost") from node n to the destination
node. This problem-specific heuristic function must be admissible, meaning it never
overestimates the actual cost of reaching the goal. The evaluation function of node n
is defined as f(n) = g(n) + h(n).

Algorithm A* selects the nodes to be explored based on the lowest value of f(n), preferring the nodes with the lowest estimated total cost to reach the goal. The A* algorithm works as follows:

1. Create an open list of found but not explored nodes.


2. Create a closed list to hold already explored nodes.
3. Add the starting node to the open list with an initial g-value of 0.
4. Repeat the following steps until the open list is empty or you reach the target node:
1. Find the node with the smallest f-value (i.e., the smallest g(n) + h(n)) in the open list.
2. Move the selected node from the open list to the closed list.
3. Generate all valid descendants of the selected node.
4. For each successor, calculate its g-value as the sum of the current node's g-value and the cost of moving from the current node to the successor node. Update the g-value of the successor when a better path is found.
5. If the successor is not in the open list, add it with the calculated g-value and calculate its h-value. If it is already in the open list, update its g-value if the new path is better.
6. Repeat the cycle. Algorithm A* terminates when the target node is reached or when the open list empties, indicating there is no path from the start node to the target node.
The A* search algorithm is widely used in various fields such as robotics, video games, network routing, and design problems because it is efficient and can find optimal paths in graphs or networks.
Consider the search problem below; as a contrast to A*, we will traverse it using greedy best-first search. At each iteration, each node is expanded using the evaluation function f(n) = h(n), which is given in the table below.

Expand the nodes of S and put them in the CLOSED list


Initialization: Open [A, B], Closed [S]
Iteration 1: Open [A], Closed [S, B]
Iteration 2: Open [E, F, A], Closed [S, B] : Open [E, A], Closed [S, B, F]
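For contrast with the greedy expansion above, here is a minimal A* sketch using f(n) = g(n) + h(n); successors (yielding (child, cost) pairs) and the admissible heuristic h are assumptions of this sketch.

import heapq

def a_star(start, goal, successors, h):
    open_list = [(h(start), 0, start, [start])]        # (f, g, node, path)
    g_best = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)    # lowest f-value first
        if node == goal:
            return path, g
        for child, cost in successors(node):
            g2 = g + cost
            if g2 < g_best.get(child, float("inf")):   # better path found
                g_best[child] = g2
                heapq.heappush(open_list,
                               (g2 + h(child), g2, child, path + [child]))
    return None, float("inf")        # open list empty: no path exists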
Admissibility of A*
An informed search algorithm assesses the cost of reaching the goal state using a heuristic. For A* to be guaranteed to find an optimal solution, the estimated cost must be lower than or equal to the true cost of reaching the goal state. The algorithm employs such an admissible heuristic to determine the best estimated route from the current node to the goal state.
The evaluation function in A* looks like this:
f(n) = g(n) + h(n)
f(n) = Actual cost + Estimated cost
here,
n = current node.
f(n) = evaluation function.
g(n) = the cost from the initial node to the current node.
h(n) = estimated cost from the current node to the goal state.
Conditions:
h(n) ≤ h*(n) ∴ Underestimation
h(n) ≥ h*(n) ∴ Overestimation

Key Points:

 The heuristic function h(n) is admissible if h(n) is never larger than h*(n) or if h(n) is
always less or equal to the true value.
 If A* employs an admissible heuristic and h(goal)=0, then we can argue that A* is
admissible.
 If the heuristic function is always kept low with respect to the true cost,
i.e. h(n) ≤ h*(n), then A* is guaranteed to find an optimal solution.

Real-Life Examples:

Case 1:
Let's suppose you are going to purchase shoes that have a price of $1000. Before
making the purchase, you estimated that the shoes would be worth $1200. When you went to
the store, the shopkeeper informed you that the shoes' actual price was $1,000, which is
less than your estimated value, indicating that you had overestimated their value by $200.
So this is the case of overestimation.
1200 > 1000
i.e. h(n) ≥ h*(n) ∴ Overestimation
Case 2:
Similar to the last situation, you are planning to buy a pair of shoes. This time, you estimate
the shoes' value to be $800. However, when you arrive at the store, the shopkeeper informs
you that the shoes' true price is $1000, which is higher than your estimate, indicating that
you had underestimated their value by $200. In this situation, underestimation has
occurred.
800 < 1000
i.e. h(n) ≤ h*(n) ∴ Underestimation
Example:

Graph

where X is the start node and Y is the goal node; in between these two nodes there are
intermediate nodes A and B, and all values in the above diagram are actual cost
values, i.e. h*(n).
Case 1: Overestimation
Let's suppose,
H(A) = 60 [These are estimated values, i.e. h(n)]
H(B) = 50
So, using the A* equation, f(n) = g(n) + h(n),
by putting values
f(A) = 100 + 60 = 160
f(B) = 100 + 50 = 150
by comparing f(A) & f(B), f(A) > f(B) so choose path of B node and apply A* equation
again
f(Y) = g(Y) + h(Y) [here h(Y) is 0, Goal state]
= 140 + 0 [g(Y)=100+40=140 this is actual cost i.e. h*(B)]
= 140
The least cost to get from X to Y, as shown in the above graph, is 130. However, in
Case 1 we took into consideration the estimated costs h(n) of A and B, which were 60 and 50
respectively. As a result, 140 > 130, in line with the overestimation condition h(n) ≥ h*(n);
and here, since the value of f(A) is bigger than f(Y), we are unable to proceed along the
alternative path through node A.
Case 2: Underestimation
Let’s suppose,
H(A) = 20 [These are estimated values, i.e. h(n)]
H(B) = 10
So, using the A* equation, f(n) = g(n) + h(n),
by putting values
f(A) = 100 + 20 = 120
f(B) = 100 + 10 = 110, by comparing f(A) & f(B), f(A) > f(B), so choose path of B node and
apply A* equation again
f(Y) = g(Y) + h(Y) [here h(Y) is 0, because it is goal state]
= 140 + 0 [g(Y) = 100 + 40 = 140 this is actual cost i.e. h*(B)]
= 140
Now, notice that f(Y) is the same in both circumstances. But in the second case, comparing
f(A) with f(Y), i.e. 120 < 140, means we can still go via node A. Therefore A* will go
with f(A), which has a value of 120.
f(Y) = g(Y) + h(Y)
= 130 + 0 [g(Y) = 100 + 30 = 130; this is the actual cost via A]
= 130
So, based on all of these calculations, we can say that this is the optimal value.

Iterative Deepening A*
IDA* is a variant of depth-first search (DFS) that iteratively deepens its search by
incrementing the cost threshold, which controls the depth of the exploration. Unlike A*,
which explores all possible nodes within a threshold, IDA* uses a heuristic function to
evaluate and prioritize the most promising nodes. This allows it to prune less promising
paths, reducing memory usage while ensuring that the search focuses on optimal routes.
Step-by-Step Process of the IDA* Algorithm
1. Initialization: Set the root node as the current node and compute its f-score.
2. Set Threshold: Initialize a threshold based on the f-score of the starting node.
3. Node Expansion: Expand the current node’s children and calculate their f-scores.
4. Pruning: If the f-score exceeds the threshold, prune the node and store it for future
exploration.
5. Path Return: Once the goal node is found, return the path from the start node to the
goal.
6. Update Threshold: If the goal is not found, increase the threshold based on the
minimum pruned value and repeat the process.
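These six steps correspond to the following sketch; successors (yielding (child, cost) pairs) and h are assumed problem-specific callables.

def ida_star(start, goal, successors, h):
    def dfs(node, g, threshold, path):
        f = g + h(node)
        if f > threshold:
            return f, None               # prune; report the pruned f-value
        if node == goal:
            return f, path               # path return
        minimum = float("inf")
        for child, cost in successors(node):
            if child not in path:        # avoid cycles on the current path
                t, found = dfs(child, g + cost, threshold, path + [child])
                if found is not None:
                    return t, found
                minimum = min(minimum, t)
        return minimum, None

    threshold = h(start)                 # initial threshold from the start node
    while True:
        t, found = dfs(start, 0, threshold, [start])
        if found is not None:
            return found
        if t == float("inf"):
            return None                  # search space exhausted: no solution
        threshold = t                    # new threshold = minimum pruned value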

Example of IDA* Algorithm

In the tree below, the f-score is written inside each node (i.e., the f-scores are already computed). The start node is 2 and the goal node is 15; explored nodes are coloured green.
We now have to reach the given goal using the IDA* algorithm.
Iteration 1

 Take the root node as the current node, i.e. 2.
 Threshold = current node value (2 = 2), so explore its children.
 4 > threshold and 5 > threshold, so this iteration is over and the pruned values are 4
and 5.
Iteration 2

 Among the pruned values, the least is 4, so threshold = 4.
 Current node = 2 and 2 < threshold, so explore its two children one by one.
 The first child is 4: set current node = 4, which equals the threshold, and explore its
children 5 and 4. Since 5 > threshold, prune it, then explore the second child of node
4, i.e. 4: set current node = 4 = threshold and explore its children 8 and 7. Both 8 and
7 > threshold, so prune them. At the end of this, the pruned values are 5, 8, 7.
 Similarly, explore the second child of the root node 2, i.e. 5, as the current node;
5 > threshold, so prune it.
 So, our pruned values are 5, 8, 7.
Iteration 3

 Among the pruned values, the least is 5, so threshold = 5.
 Current node = root node = 2 and 2 < threshold, so explore its two children one by one.
 The first child is 4: set current node = 4 < threshold and explore its children 5 and 4.
Since 5 = threshold, explore its children 7 and 8, both > threshold, so prune them.
Then explore the second child of node 4, i.e. 4: set current node = 4 < threshold and
explore its children 8 and 7; both > threshold, so prune them. At the end of this, the
pruned values are 7 and 8.
 Similarly, explore the second child of the root node 2, i.e. 5, as the current node;
5 = threshold, so explore its children 6 and 6. Both 6 and 6 > threshold, so prune them.
 So, our pruned values are 7, 8 and 6.

Iteration 4

 Among the pruned values, the least value is 6, so threshold = 6.
 Current node = root node = 2 and 2 < threshold, so explore its two children one by one.
 The first child is 4: set current node = 4 < threshold and explore its children 5 and 4.
Since 5 < threshold, explore its children 7 and 8, both > threshold, so prune them.
Then explore the second child of node 4, i.e. 4: set current node = 4 < threshold and
explore its children 8 and 7; both > threshold, so prune them. At the end of this, the
pruned values are 7 and 8.
 Similarly, explore the second child of the root node 2, i.e. 5, as the current node;
5 = threshold, so explore its children 6 and 6. Both equal the threshold, so explore
them one by one.
 The first 6 has two children, 6 and 8. Since 6 = threshold, explore its children 13 and
7; both > threshold, so prune them. Next, 8 > threshold, so prune it. The pruned
values at this stage are 13, 7 and 8.
 Explore the second child of 5, i.e. 6 = threshold, and explore its children 7 and 9.
Both are greater than the threshold, so prune them.
 So, our pruned values are 13, 7, 8 and 9.

Iteration 5

 Among the pruned values, the least value is 7, so threshold = 7.
 Current node = root node = 2 and 2 < threshold, so explore its two children one by one.
 The first child is 4: set current node = 4 < threshold and explore its children 5 and 4.
 The first child of 4 is 5, and 5 < threshold, so explore its children 7 and 8. Here 7 =
threshold, so explore its children 12 and 14; both > threshold, so prune them. The
second child of 5 is 8 > threshold, so prune it. At this stage, the pruned values are 12,
14 and 8.
 Now explore the second child of node 4, i.e. 4: set current node = 4 < threshold and
explore its children 8 and 7. Here 8 > threshold, so prune it; then go to the second
child, 7 = threshold, and explore its children 13 and 8, both > threshold, so prune
them. At the end of this, the pruned values are 12, 14, 8 and 13.
 Similarly, explore the second child of the root node 2, i.e. 5, as the current node;
5 < threshold, so explore its children 6 and 6. Both 6 and 6 < threshold, so explore
them one by one.
 The first 6 has two children, 6 and 8. Since 6 < threshold, explore its children 13 and
7. Here 13 > threshold, so prune it; 7 = threshold but has no children, so shift to the
next child of 6, i.e. 8 > threshold, and prune it. The pruned values at this stage are 12,
14, 8 and 13.
 Explore the second child of 5, i.e. 6 < threshold, and explore its children 7 and 9.
Here 7 = threshold, so explore its children 8 and 14; both are greater than the
threshold, so prune them. Now the remaining child of 6 is 9 > threshold, so prune it.
 So, our pruned values are 12, 14, 8, 13 and 9.

Iteration 6

 Among the pruned values, the least value is 8, so threshold = 8.
 Current node = root node = 2 and 2 < threshold, so explore its two children one by one.
 The first child is 4: set current node = 4 < threshold and explore its children 5 and 4.
 The first child of 4 is 5, and 5 < threshold, so explore its children 7 and 8. Here 7 <
threshold, so explore its children 12 and 14; both > threshold, so prune them. The
second child of 5 is 8 = threshold, so explore its children 16 and 15; both > threshold,
so prune them. At this stage, the pruned values are 12, 14, 16 and 15.
 Now explore the second child of node 4, i.e. 4: set current node = 4 < threshold and
explore its children 8 and 7. Here 8 = threshold, so explore its children 12 and 9; both
> threshold, so prune them. Then go to the second child, 7 < threshold, and explore
its children 13 and 8. Here 13 > threshold, so prune it; 8 = threshold but has no
children. At the end of this, the pruned values are 12, 14, 16, 15 and 13.
 Similarly, explore the second child of the root node 2, i.e. 5, as the current node;
5 < threshold, so explore its children 6 and 6. Both 6 and 6 < threshold, so explore
them one by one.
 The first 6 has two children, 6 and 8. Since 6 < threshold, explore its children 13 and
7. Here 13 > threshold, so prune it; 7 < threshold but has no children, so shift to the
next child of 6, i.e. 8 = threshold, and explore its children 15 and 16. Here 15 = goal
node, so stop this iteration; there is no need to explore further.
 The goal path is 2 -> 5 -> 6 -> 8 -> 15.
Recursive Best First Search
Recursive Best First Search (RBFS) is a variant of the Best First Search algorithm, which is
a greedy search algorithm used for traversing or searching trees or graphs. RBFS is a
memory-efficient version of Best First Search that uses recursion to avoid the need for
maintaining a large priority queue.
Here's a high-level overview of how Recursive Best First Search works:
Initialization: RBFS begins by initializing the search with the initial state and setting a
threshold value, initially set to infinity.
Base Case: If the current node is a goal state, return success.
Expansion: Expand the current node to generate its successors or children.
Evaluation: For each child, evaluate its heuristic value (typically using a heuristic function
that estimates the cost from that state to the goal state).
Recursion: Choose the child with the lowest heuristic value. If its heuristic value exceeds the
threshold, update the threshold to be the minimum of the child's heuristic value and the
threshold. Then, recursively apply RBFS to this child node with the new threshold.
Backtrack: If the recursive call returns a failure (indicating that no solution was found within
the threshold), update the current node's heuristic value to be the minimum of all its children's
heuristic values and return this minimum value up the recursion stack.
Termination: Continue until a goal state is found or until the search exhaustively explores all
states within the current threshold.
RBFS is particularly useful in cases where memory constraints are a concern because it doesn't need to maintain a large priority queue, unlike traditional Best First Search algorithms. However, RBFS can potentially revisit states multiple times due to its recursive nature.
Pruning the Closed List
Pruning the Open List

Divide and Conquer Beam Stack Search.


Divide and Conquer is an algorithmic pattern. In this design, the approach is to take a problem on a large input, break the input into smaller pieces, solve the problem on each of the small pieces, and then merge the piecewise solutions into a global solution. This mechanism of solving the problem is called the Divide & Conquer strategy. A Divide and Conquer algorithm attacks a problem using the following three steps:
1. Divide the original problem into a set of subproblems.
2. Conquer: Solve every subproblem individually, recursively.
3. Combine: Put together the solutions of the subproblems to get the solution to the whole problem.
Generally, we can follow the divide-and-conquer approach in this three-step process.
Examples: Classic computer algorithms based on the Divide & Conquer approach include merge sort, quick sort, and binary search.
Fundamentals of the Divide & Conquer Strategy: There are two fundamentals of the Divide & Conquer strategy:
1. Relational Formula
2. Stopping Condition
1. Relational Formula: This is the formula that we generate from the given technique. After generating the formula, we apply the D&C strategy, i.e., we break the problem recursively and solve the broken subproblems.
2. Stopping Condition: When we break the problem using the Divide & Conquer strategy, we need to know how long to keep applying it. The condition at which we need to stop our recursion steps of D&C is called the stopping condition.
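Merge sort makes the three steps and the stopping condition concrete; the following is a minimal sketch.

def merge_sort(a):
    if len(a) <= 1:                      # stopping condition
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])           # divide, then conquer recursively
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0              # combine the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))       # [1, 2, 5, 7, 9]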
UNIT V
SAINT

SAINT (Symbolic Automatic INTegrator):


SAINT was an early AI program developed in 1961 by James Slagle at MIT as part of his doctoral work.
It was designed to solve symbolic integration problems in calculus, aiming to mimic the problem-solving capabilities of a human expert in calculus.
One of the first AI systems that used AND/OR graphs was SAINT. Symbolic integration was chosen as the domain: the system solved an integral by searching for transformations and by problem decomposition.
SAINT operated by using a collection of rules and heuristics to manipulate symbolic
expressions and solve integration problems step by step.
The system could identify patterns in integrals, apply appropriate transformation rules, and
make decisions based on the current state of the problem.
Here's a simplified example of how SAINT might work:
Problem: Solve the integral ∫(x^2 + 2x) dx.
SAINT Steps:
Recognize the integral as a sum of two terms: x^2 and 2x.
Apply the power rule of integration to each term separately.
Integrate x^2 to get (1/3)x^3.
Integrate 2x to get x^2.
Combine the results to get the final solution: (1/3)x^3 + x^2 + C, where C is the constant of
integration.
SAINT demonstrated the potential of expert systems to tackle complex problem-solving tasks
in specific domains, although it had limitations in handling real-world variability and
complexity.
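For comparison, a modern symbolic algebra library reproduces the same result in one call; this uses the sympy package purely as an illustration and is not how SAINT itself worked.

from sympy import symbols, integrate

x = symbols("x")
print(integrate(x**2 + 2*x, x))    # x**3/3 + x**2  (the constant C is implicit)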
Dendral

Dendral was an early expert system developed in the 1960s at Stanford University by Edward
Feigenbaum, Joshua Lederberg, and their colleagues.
It was designed to solve problems in organic chemistry, particularly in the analysis of mass
spectrometry data to identify the structure of organic molecules.
DENDRAL was built to encode a database of the known rules (valence requirements) and exceptions in organic chemistry that determine the structures of molecules - the data and knowledge a human chemist might possess and use. Therefore, by definition, it possesses and utilizes this human knowledge.

Dendral used a knowledge base containing information about chemical structures, reactions,
and spectral data, along with inference rules to reason about possible molecular structures.
The system employed heuristic search strategies to explore the space of possible molecular
structures and evaluate them against the available spectral data.
Here's a simplified example of how Dendral might work:
Problem: Given a mass spectrum of an unknown compound, determine its molecular
structure.
Dendral Steps:
Analyze the mass spectrum to identify peaks corresponding to molecular fragments.
Generate candidate molecular structures consistent with the observed fragment masses and
their relationships.
Apply chemical knowledge and rules to refine the candidate structures and eliminate
inconsistencies.
Evaluate the remaining candidate structures against additional spectral data to determine the
best match.
Dendral demonstrated the feasibility of using expert systems to perform complex analysis
tasks in scientific domains, paving the way for future developments in artificial intelligence
and expert systems.
Both SAINT and Dendral represent pioneering efforts in the field of expert systems,
showcasing the potential of AI to tackle complex problem-solving tasks in specific domains.

Goal Tree
Since one has to traverse all the edges at an AND node, the solution obtained will not be a
path, but a subtree (or a subgraph) in the AND/OR search space. However, one can still use a
depth first search approach, with the modification that at each AND node more than one search will be spawned. In the above example, there will be three searches.
The first will search below the node labelled Outing, the second below the node labelled Movie, and the third below the node labelled Dinner. But since they search independently, they will not explore fruitless combinations as our first formulation did. In general, of course, an AND/OR search space will be much larger, with many AND and OR levels, and we will still need to adopt a heuristic approach.
Algorithm:
• Starting at the root, traverse the graph along marked paths till the algorithm reaches a set of unsolved nodes U.
• Pick a node n from U and refine it.
• Propagate the revised estimate of n up via all ancestors.
• If for a node all AND successors along the marked path are marked SOLVED, mark it SOLVED as well.
• If a node has OR edges emanating from it, and the cheapest successor is marked SOLVED, then mark the node SOLVED.
• Terminate when the root node is marked SOLVED.

AO* ALGORITHM

Show the progress of the AO* algorithm on the synthetic problem given below. Consider each edge to have a cost of 1; nodes are labelled with heuristic values. Solved nodes are represented by double-lined boxes and have cost zero.
Rule based System
Rule-based systems, a foundational technology in artificial intelligence (AI), have long
been instrumental in decision-making and problem-solving across various domains. These
systems operate on a set of predefined rules and logic to make decisions, perform tasks, or
derive conclusions. Despite the rise of more advanced AI methodologies, such as machine
learning and neural networks, rule-based systems remain crucial due to their transparency,
ease of use, and interpretability.

Components of a Rule-Based System

A typical rule-based system comprises several key components:


1. Rules: The core of the system, these are conditional statements that define the system's
behavior. A rule generally follows the format "IF condition THEN action." For example,
in an expert system for medical diagnosis, a rule might be "IF patient has fever AND
cough THEN consider flu."
2. Knowledge Base: This is the repository where all the rules and facts are stored. The
knowledge base is built from domain-specific knowledge and can be manually curated
or derived from expert input.
3. Inference Engine: The inference engine is the component that applies the rules to the
knowledge base to derive conclusions or make decisions. It interprets the rules,
processes them against the current facts or data, and determines the appropriate actions
or outputs.
4. Working Memory: This is a dynamic component that holds the current facts or data
being processed by the system. It is updated as the inference engine applies rules and
new information becomes available.
5. User Interface: In many rule-based systems, the user interface allows users to interact
with the system, input data, and receive outputs or recommendations.


How Rule-Based Systems Work?

The operation of a rule-based system involves several stages:


1. Data Input: The system receives input data from the user or another source. This data
can range from simple numerical values to complex information like patient symptoms
or transaction records.
2. Rule Matching: The inference engine examines the input data against the rules stored
in the knowledge base. It looks for rules whose conditions match the input data.
3. Rule Execution: Once a rule is matched, the inference engine executes the
corresponding action. This might involve updating the working memory, deriving new
facts, or generating an output.
4. Conflict Resolution: In cases where multiple rules are triggered simultaneously, the
inference engine uses conflict resolution strategies to determine which rule to apply
first. Common strategies include prioritizing rules based on specificity or order of entry.
5. Output Generation: The system generates an output based on the executed rules. This
output can be a decision, recommendation, or another form of response. For example, in
a medical diagnosis system, the output might be a suggested treatment plan.
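A toy version of this loop fits in a few lines; the medical rules below are hypothetical, and conflict resolution is reduced to simple rule order.

# Each rule pairs a set of condition facts with a fact to conclude.
rules = [
    ({"fever", "cough"}, "consider_flu"),
    ({"consider_flu", "body_ache"}, "recommend_rest"),
]

def forward_chain(facts, rules):
    facts = set(facts)                        # working memory
    changed = True
    while changed:                            # repeat until nothing new fires
        changed = False
        for conditions, conclusion in rules:  # rule matching
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)         # rule execution
                changed = True
    return facts

print(forward_chain({"fever", "cough", "body_ache"}, rules))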
XCON

XCON stands for eXpert CONfigurer. An expert configurer in artificial intelligence refers to a specialized AI system designed to automate the process of configuring complex systems based on specific requirements. This type of system typically encodes expert knowledge in a domain (like hardware setup, software deployment, or product customization) to generate solutions that are optimized, compatible, and tailored to customer needs. Here's an overview of expert configurers in AI, how they work, and their applications.

How Expert Configurers Work

1. Knowledge Representation: Expert configurers often use rule-based systems or


constraint-based reasoning to represent knowledge about the configuration options,
restrictions, dependencies, and relationships between components. Each rule or
constraint embodies domain expertise, guiding the system in choosing compatible and
valid configurations.
2. Inference Engine: The AI uses an inference engine that applies these rules or
constraints to customer requirements, often through techniques like forward chaining
or backward chaining. It may perform constraint satisfaction to ensure all selected
components are compatible, comply with regulations, or meet technical specifications.
3. Optimization: Some expert configurers go beyond simply satisfying constraints and
use optimization techniques to find configurations that best meet certain criteria, such
as cost efficiency, performance, or energy consumption.
4. User Interaction: Advanced configurers allow for interactive sessions, where the
user can input preferences or constraints, and the system dynamically adjusts its
recommendations. This allows users to explore various configuration options while
ensuring compatibility.

Rule Based Expert Systems

A rule looks at a part of a state which matches a pattern and modifies it to result in a new state.
• A rule associates an action with a pattern in the state; this association is known as a production.
• Unifying format for heuristic knowledge, business rules, and actions.
• Basis for a Turing complete programming language.
• The user or programmer only states the rules.
• Very popular in the business environment, e.g. lending by banks.
• The programmer only specifies the pattern action association.
• The programmer does not specify when an action should be executed.
• Different from imperative programming.
• Search Strategy
• Explores which rules are applicable, and
• Which one to apply
• User has a choice of strategies
• Working Memory(WM)
• Represents the current state. Contains a set of records or statements, known as working
memory elements (WMEs)
• A model of the short term memory (STM) of the problem solver.
• Each Production or A Rule.
• Represents a move
• Has a name or an identifier
• Has one or more pattern on the left hand side (LHS)
• Has one or more action on the right hand side (RHS)
• A model of the long term memory (LTM) of the problem solver.
• Inference Engine
• Matches patterns in all rules with all WMEs
• Picks one matching rule to fire or execute the actions in the rule
• Repeats the process till some termination criterion

Case Studies: Autonomous Vehicle

Autonomous vehicles (AVs) represent a transformative field in artificial intelligence, utilizing


machine learning, computer vision, sensor fusion, and real-time decision-making to enable
self-driving capabilities. Below are a few case studies showcasing how various companies
and research initiatives are developing autonomous vehicle technology using AI.

Tesla Autopilot and Full Self-Driving (FSD)


 Overview: Tesla’s Autopilot and FSD features aim to provide increasingly
autonomous driving capabilities through over-the-air software updates to its vehicles.
 Technology:
o Computer Vision: Unlike many other AV companies, Tesla relies primarily on
camera vision instead of LiDAR. Eight surround cameras, radar, and
ultrasonic sensors help the vehicle "see" the road.
o Neural Networks: Tesla uses a convolutional neural network (CNN) trained
on a vast amount of driving data to recognize lanes, traffic signs, signals,
pedestrians, and other vehicles.
o Self-Supervised Learning: Tesla uses real-world data collected from its entire
fleet to improve its neural network. When one Tesla vehicle encounters a
difficult situation, the system learns and updates all other Tesla vehicles.
 Outcomes: Tesla's FSD is still considered a driver-assistance system and not fully
autonomous, but it continues to improve as its neural network trains on billions of
miles of real-world data.

Uber ATG (Advanced Technologies Group)

 Overview: Uber began developing autonomous vehicles with a focus on applying
them to ride-sharing. The initiative faced several setbacks and was eventually sold to
Aurora Innovation in 2020.
 Technology:
o Multi-Sensor Fusion: Uber's AVs use a combination of LiDAR, cameras, and
radar to create high-resolution environmental maps (a toy fusion sketch follows this case study).
o Behavioral Prediction Models: Uber ATG invested in predicting the behavior
of other road users to make the AV safer and more predictable.
o Advanced Mapping: Uber developed precise maps for its AVs to identify
road characteristics, navigate complex intersections, and recognize road signs
accurately.
 Outcomes: Although Uber ATG is now part of Aurora, the technology developed
through Uber’s efforts has contributed significantly to AV research, particularly in
handling unpredictable urban driving scenarios.
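The idea behind multi-sensor fusion can be shown with a very small example: combining two noisy distance estimates by inverse-variance weighting, which is the core of Kalman-style fusion. The sensor noise figures below are made-up assumptions, not Uber's actual parameters.

```python
# Inverse-variance fusion of two noisy range estimates (illustrative).
# A LiDAR-like sensor (low noise) and a radar-like sensor (higher noise)
# both measure the distance to the same obstacle, in metres.

lidar_dist, lidar_var = 24.8, 0.05   # assumed measurement and variance
radar_dist, radar_var = 25.6, 0.40

# Weight each measurement by the inverse of its variance: more
# trustworthy sensors contribute more to the fused estimate.
w_lidar = 1.0 / lidar_var
w_radar = 1.0 / radar_var

fused_dist = (w_lidar * lidar_dist + w_radar * radar_dist) / (w_lidar + w_radar)
fused_var = 1.0 / (w_lidar + w_radar)   # fused estimate is less uncertain

print(f"fused distance: {fused_dist:.2f} m, variance: {fused_var:.3f}")
```

Note how the fused variance is smaller than either sensor's alone: combining sensors does not just average them, it reduces overall uncertainty.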

Medical Image Analysis


AI has revolutionized medical image analysis by enhancing diagnostic accuracy,
accelerating workflows, and enabling personalized treatment planning. Here are some notable
use cases of AI in medical image analysis:

Disease Detection and Diagnosis

 Use Case: Detecting diseases such as cancer, cardiovascular disease, neurological
disorders, and lung conditions.
 Examples:
o Breast Cancer Screening: AI models can analyze mammograms to identify
early signs of breast cancer, detecting even subtle abnormalities that might be
missed by the human eye. Google Health has shown that AI can reduce false
positives and false negatives, helping radiologists make more accurate
diagnoses.
o Lung Nodule Detection: AI systems trained on CT scans can identify and
classify lung nodules, a key step in early lung cancer detection. Companies
like Zebra Medical Vision and Infervision provide AI solutions that analyze
chest CTs, detecting nodules with high sensitivity.
 Impact: AI models reduce diagnostic errors, support early detection, and can help
prioritize high-risk cases for radiologists, ultimately improving patient outcomes
(a short sketch of the underlying error metrics follows below).
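Claims about reducing false positives and false negatives are usually quantified with sensitivity and specificity. Below is a minimal sketch computing both from made-up predictions and labels, not from any real screening data.

```python
# Sensitivity/specificity from binary predictions (illustrative data).
# 1 = disease present, 0 = disease absent.
y_true = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1]   # assumed ground-truth labels
y_pred = [1, 1, 0, 0, 0, 1, 0, 1, 0, 1]   # assumed model predictions

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)

sensitivity = tp / (tp + fn)   # fraction of real cases the model catches
specificity = tn / (tn + fp)   # fraction of healthy cases correctly cleared

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```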

Brain Imaging for Neurological Disorders

 Use Case: Detecting and monitoring neurological conditions like Alzheimer’s disease,
multiple sclerosis (MS), and stroke.
 Examples:
o Alzheimer's Prediction: AI models analyzing MRI and PET scans can
identify early markers of Alzheimer’s disease, such as changes in brain
volume or amyloid plaque accumulation. Researchers use deep learning
models to detect subtle structural changes in the brain years before symptoms
appear.
o Stroke Detection: AI-powered platforms like Viz.ai analyze CT angiograms to
quickly identify large vessel occlusions (LVOs), speeding up stroke diagnosis
and enabling faster intervention.
 Impact: AI accelerates time-sensitive diagnoses, such as stroke, allowing faster
response times and potentially reducing long-term neurological damage.
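As a toy illustration of tracking "changes in brain volume" over time, the snippet below flags a decline between two scans that exceeds a chosen threshold. The volumes and the 2% threshold are invented for illustration and carry no clinical meaning; real pipelines derive such volumes from segmented MRI scans.

```python
# Toy longitudinal check: flag brain-volume decline between two scans.
baseline_ml = 1180.0   # assumed brain volume at first scan, in mL
followup_ml = 1146.0   # assumed volume one year later, in mL

decline = (baseline_ml - followup_ml) / baseline_ml
THRESHOLD = 0.02       # arbitrary 2% annual-decline flag, not clinical

if decline > THRESHOLD:
    print(f"volume fell {decline:.1%}; flag scan for radiologist review")
else:
    print(f"volume change {decline:.1%} within expected range")
```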

Organ Segmentation and Surgical Planning

 Use Case: Segmenting organs and anatomical structures in medical images for
surgical planning and radiation therapy.
 Examples:
o Tumor Segmentation: AI-powered segmentation tools can identify tumor
boundaries in MRIs or CT scans. For instance, AI models can segment liver or
brain tumors, aiding surgeons in planning precise interventions and helping
radiologists in accurate dose targeting for radiation therapy.
o 3D Organ Reconstruction: AI models create 3D reconstructions of organs,
which can be particularly useful for complex surgeries. These reconstructions
allow surgeons to "navigate" around critical structures, improving outcomes in
surgeries such as spinal or cardiac procedures.
 Impact: AI in organ segmentation ensures higher precision in complex surgeries,
reduces surgical risks, and enhances radiation therapy targeting accuracy.
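Segmentation quality is commonly scored with the Dice coefficient, which measures the overlap between a predicted mask and the ground truth. A minimal NumPy sketch with tiny made-up masks:

```python
# Dice coefficient between a predicted and a ground-truth mask.
import numpy as np

# Tiny 4x4 binary masks standing in for tumour segmentations (made up).
truth = np.array([[0, 0, 1, 1],
                  [0, 1, 1, 1],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0]])
pred  = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0]])

intersection = np.logical_and(truth, pred).sum()
dice = 2.0 * intersection / (truth.sum() + pred.sum())
print(f"Dice = {dice:.2f}")   # 1.0 would mean perfect overlap
```

A Dice score of 1.0 means the predicted and reference masks overlap perfectly; clinical segmentation models are typically evaluated this way against expert-drawn contours.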
