AI Fundamentals for Tech Enthusiasts

AKTU university notes for the AI subject

Uploaded by Puneet Bhandari

UNIT I: INTRODUCTION
Introduction – Definition – Future of Artificial Intelligence – Characteristics of Intelligent Agents – Typical Intelligent Agents – Problem Solving Approach to Typical AI Problems.

UNIT II: PROBLEM SOLVING METHODS
Problem Solving Methods – Search Strategies – Uninformed – Informed – Heuristics – Local Search Algorithms and Optimization Problems – Searching with Partial Observations – Constraint Satisfaction Problems – Constraint Propagation – Backtracking Search – Game Playing – Optimal Decisions in Games – Alpha-Beta Pruning – Stochastic Games

What is Artificial Intelligence (AI)?


In today's world, technology is growing very fast, and we come into contact with new technologies every day. One of the booming technologies of computer science is Artificial Intelligence, which is ready to create a new revolution in the world by making intelligent machines. Artificial Intelligence is now all around us, working in a variety of subfields ranging from general to specific, such as self-driving cars, playing chess, proving theorems, playing music, painting, etc.

AI is a fascinating and universal field of computer science with great scope for the future. AI aims to make a machine work like a human.

Artificial Intelligence is composed of two words, Artificial and Intelligence, where Artificial means "man-made" and Intelligence means "thinking power"; hence AI means "a man-made thinking power."

So, we can define AI as:

"It is a branch of computer science by which we can create intelligent machines that can behave like humans, think like humans, and make decisions."

Artificial Intelligence exists when a machine has human-based skills such as learning, reasoning, and problem solving.

With Artificial Intelligence you do not need to pre-program a machine to do some work; instead, you can create a machine with programmed algorithms that can work with its own intelligence, and that is the awesomeness of AI.

It is believed that AI is not a new idea; some people say that, according to Greek myth, there were mechanical men in early days that could work and behave like humans.
The 4 Types of AI
1. Reactive machines

Reactive machines are the most basic type of artificial intelligence. Machines built
in this way don’t possess any knowledge of previous events but instead only
“react” to what is before them in a given moment. As a result, they can only
perform certain advanced tasks within a very narrow scope, such as playing
chess, and are incapable of performing tasks outside of their limited context.
2. Limited memory machines

Machines with limited memory possess a limited understanding of past events.


They can interact more with the world around them than reactive machines can.
For example, self-driving cars use a form of limited memory to make turns,
observe approaching vehicles, and adjust their speed. However, machines with
only limited memory cannot form a complete understanding of the world
because their recall of past events is limited and only used in a narrow band of
time.
3. Theory of mind machines

Machines that possess a “theory of mind” represent an early form of artificial


general intelligence. In addition to being able to create representations of the
world, machines of this type would also have an understanding of other entities
that exist within the world. As of this moment, this reality has still not
materialized.
4. Self-aware machines

Machines with self-awareness are theoretically the most advanced type of AI and would possess an understanding of the world, of others, and of themselves. This is what most people mean when they talk about achieving AGI. Currently, this is a far-off reality.
Characteristics of Artificial Intelligence

The ideal characteristic of artificial intelligence is its ability to rationalize and take actions that have the best chance of achieving a specific goal. However, the term artificial intelligence can be applied to any machine that exhibits traits associated with a human mind, such as learning and solving problems. The characteristics of artificial intelligence are:

1. Symbolic Processing

2. Non-algorithmic Processing

3. Reasoning

4. Perception
5. Communication

6. Ability to Learn
7. Imprecise knowledge
8. Planning
9. Fast decision making
1. Symbolic Processing:
In AI applications, computers process symbols rather than numbers or
letters. AI applications process strings of characters that represent real-
world entities or concepts. Symbols can be arranged in structures such as
lists, hierarchies, or networks. These structures show how symbols relate
to each other.

2. Non-algorithmic Processing:
Computer programs outside the AI domain are programmed algorithms, that is, fully specified step-by-step procedures that define a solution to the problem. The actions of a knowledge-based AI system depend to a far greater degree on the situation in which it is used.
3. Reasoning:
Reasoning is the ability to solve problems through logical deduction or induction. The term artificial intelligence applies to a machine that can reason.
4. Perception:
Perception is the ability to deduce things about the world from visual images, sounds and other sensory inputs.

5. Communication:
Communication is the ability to understand written and spoken language. It involves communicating in human language and understanding people's intentions and emotions through natural language processing techniques.

6. Ability to Learn:
AI programs have the ability to learn. Conventional systems have not yet achieved that level.

7. Imprecise knowledge:
An AI program can work with imprecise or general knowledge, whereas a conventional program needs precise and specific knowledge.
8. Planning:

Planning is the ability to set and achieve goals through sequences of actions that affect progress towards achieving those goals.

9. Fast decision making:


AI has a powerful role to play in making real-world decisions. Many of the most innovative organizations in the world, such as Facebook, Google and Amazon, rely on AI algorithms as part of their decision-making process. AI is capable of handling many different factors at once when making complex decisions, can process much more data at once, and can use probability to suggest or implement the best possible decision.

Application areas of AI

Every branch of science, engineering and technology shares the tools and techniques available in the domain of artificial intelligence. Application areas of AI are classified as:
1. Expert Systems
2. Natural Language Programs (NLP)
3. Speech Recognition
4. Industrial Automation and Manufacturing
5. Robotics
6. Game Playing
7. Image Understanding and Computer Vision
8. Machine Learning
1. Expert Systems
An expert system is a software package that accumulates the knowledge
and decision making capabilities of the specialists in a particular field. An
expert system is a set of programs that manipulate encoded knowledge to
solve problems in a specialized domain that normally requires human
expertise.
Expert systems are automatic systems. They respond as an intelligent assistant by giving advice and suggesting possible decisions. Expert systems have been built that can diagnose faults in military systems such as aircraft and radars, taxonomically classify members of a particular species, advise on possible chemical structures, diagnose diseases, etc.
It is also known as Knowledge Based System. It uses a knowledge base for
its artificial intelligence. It also uses the decision rules of human specialists
to arrive at certain conclusions and to give recommendations.

The primary goal of expert systems research is to make expertise available to decision makers and technicians who need answers quickly. These knowledge-based applications of artificial intelligence have enhanced productivity in business, science, engineering and the military.

One of the first major applications of AI was the design and development of the expert system named MYCIN at Stanford University in the mid-1970s.
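The decision-rule idea behind such systems can be sketched in a few lines of Python. Everything below (the rules, symptoms, and conclusions) is a hypothetical illustration, not real medical knowledge and not the actual MYCIN rule base:

```python
# A minimal sketch of a rule-based expert system: a knowledge base of
# condition-action rules, and an inference step that fires every rule
# whose conditions are satisfied by the known facts.

# Knowledge base: rules a (hypothetical) specialist might encode.
RULES = [
    ({"fever", "cough"}, "possible flu"),
    ({"fever", "rash"}, "possible measles"),
    ({"sneezing", "itchy eyes"}, "possible allergy"),
]

def diagnose(symptoms):
    """Return every conclusion whose conditions are all present."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= symptoms]   # subset test: all conditions hold

print(diagnose({"fever", "cough", "headache"}))  # ['possible flu']
```

Real expert systems add explanation facilities and certainty factors on top of this basic match-and-fire loop.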

2. Natural Language Programs (NLP)


One of the long-standing goals of artificial intelligence is the creation of programs that understand and generate human language. To instruct a computer to do a certain task, we have to use one or another programming language to communicate with the computer. Learning the syntax and rules of the programming language is required, and this is required again for every new language to be mastered. If we could instruct the computer in our own language, it would be very convenient, and this is the idea of Natural Language Programming (NLP). Such software allows users to instruct the computer to perform any task through plain English, French, Spanish or any other natural language.

A Natural Language Program interprets the parts of speech and the meaning of a sentence, and then converts the sentence or command into the computer's own language, which it can understand and process. Such a program transforms sentences occurring as part of a dialog into data structures that convey the intended meaning of the sentences.

3. Speech Recognition

Speech recognition provides a means by which speech is broken down into individual word tokens. Discrete speech recognition is used widely today, presenting a minor inconvenience to the user.

4. Industrial Automation and Manufacturing


Industrial automation is concerned with the introduction, incorporation and intervention of automation, starting from raw material handling, through planning and production of items and their assembly, leading to more capable equipment and inventory. Automation means less human interference and more computing in a knowledge-enriched environment. Flexible manufacturing systems have emerged to tie up with AI for industrial automation in most of the leading and pioneering industries in the world.

5. Robotics
Robotics is one of the prime areas of AI applications. AI methodology is applicable to robotics in two ways: one is the design and control of robots, and the other is the application of robots to various fields such as manufacturing, mining and medicine (surgery). Thus, robotics is a field of AI which enhances the ability to move and act in the world, possibly responding to new perceptions.

Robotic systems are highly useful in places where human movement is restricted, such as nuclear plants or mines. They work intelligently in places and atmospheres where hazards for humans prevail with high probability.

6. Game Playing :
Game playing is one of the oldest areas of endeavour in artificial intelligence. A chess-playing computer is proof of a machine doing something using intelligence. Also, the simplicity of the rules and their application in the program means that it is easy to represent the game as a search through a space of possible game positions. Thus, game playing is one of the leading domains where AI has been applied with great success.

7. Machine Learning:
Machine learning is a subfield of artificial intelligence concerned with the design and development of algorithms and techniques that allow computers to learn. The major focus of machine learning research is to extract information from data automatically, by computational and statistical methods.

Importance of Artificial Intelligence (AI)


AI is one of the important developments of the world and will surely affect human lives in the coming days. It is important in the development of computer products, robotics and related fields. In fact, AI is growing in importance every day, and the field is sure to bring vast opportunities. The importance of AI can be estimated from the following viewpoints:
1. Major companies are investing big in AI
2. Rapid Advancement

3. Vast opportunities
4. Smarter Software
5. Robotics is developing rapidly
6. AI is essential to overcome information Challenges
1. Major companies are investing big in AI:
Large hi-tech companies like IBM, Google, Microsoft, eBay and Yahoo have multiple active AI developments underway. They seem to be in a race to acquire systems and so hire inventors and creative minds. DARPA has also funded the development of the ALV (Autonomous Land Vehicle), a driverless military vehicle, and of an expert system to assist pilots.
2. Rapid Advancement
As the costs of hardware fall rapidly, AI is becoming a leading choice. High-speed Graphical Processing Units (GPUs) can recognize about 88% of the words spoken in normal human English conversation. With such trends, hardware costs will no longer be a barrier to the AI industry's acceleration.
3. Vast opportunities
AI offers a vast opportunity space. In future, AI will power vision systems that drive your car, a vacuum cleaner that recognizes the furniture and the dirt on it, and an autonomous lawnmower that mows the lawn without disturbing the flowers. Just count on a future where AI will be used as a utility and in everyday objects.
4. Smarter Software
AI will increasingly be made part of other software programs. AI-enabled software will serve as virtual assistants, providing hints and previews of processes and helping to speed up and maximize tasks.
5. Robotics is developing rapidly
Advanced robots are now gaining enhanced senses, skills and intelligence, credited to AI, machine-to-machine communication, sensors, etc. Industrial robots are already there to take up difficult and dangerous tasks such as welding and spray painting.
6. AI is essential to overcome information challenges
Information growth is becoming so challenging and speedy that humans need intelligence-aided computer systems for data exploration. In today's world, where information is exploding at an exponential rate, human expertise becomes limited in certain domains.

AI agent:

AI agents have been around since the 1980s, when computer scientists began exploring how to develop smart software that could interact like humans. Since then, the concept has evolved to include AI agents that can make decisions and complete tasks independently.

An AI agent is a software program designed to interact with its environment, perceive the data it receives, and take actions based on that data to achieve specific goals. AI agents simulate intelligent behavior, and they can be as simple as rule-based systems or as complex as advanced machine learning models. They use predetermined rules or trained models to make decisions and might need external control or supervision.

An autonomous AI agent is an advanced software program that can operate independently without human control. It can think, act, and learn on its own, without needing constant input from humans. These agents are widely used in different industries, like healthcare, finance, and banking, to make things run smoother and more efficiently. They can adjust to new situations, learn from their experiences, and make decisions using their own internal systems.

For example:

 AutoGPT is an AI agent that can generate human-like text responses. It can comprehend the context of the conversation and generate relevant responses accordingly.
 BabyAGI is an autonomous AI agent that can
independently learn and perform tasks like
understanding natural language, analyzing images,
identifying objects, following simple commands, etc.
 AgentGPT is an intelligent virtual agent designed to
interact with customers and provide them with
personalized recommendations. It can understand
natural language and provide relevant responses
based on customer queries.
Both AI tools and AI agents can perform tasks autonomously
to an extent. So what sets them apart? What qualifies an AI
tool as an agent? Let’s find out!

Characteristics of an AI agent
While AI tools and agents are software programs designed to
automate tasks, specific key characteristics differentiate AI
agents as more sophisticated AI software.

You can consider an AI tool as an AI agent when it has the following characteristics:

 Autonomy: An AI virtual agent is capable of performing tasks independently without requiring constant human intervention or input.
 Perception: The agent senses and interprets the environment it operates in through various sensors, such as cameras or microphones.
 Reactivity: An AI agent can assess the environment
and respond accordingly to achieve its goals.
 Reasoning and decision-making: AI agents are
intelligent tools that can analyze data and make
decisions to achieve goals. They use reasoning
techniques and algorithms to process information and
take appropriate actions.
 Learning: They can learn and enhance their
performance through machine, deep, and
reinforcement learning elements and techniques.
 Communication: AI agents can communicate with
other agents or humans using different methods, like
understanding and responding to natural language,
recognizing speech, and exchanging messages
through text.
 Goal-oriented: They are designed to achieve specific
goals, which can be pre-defined or learned through
interactions with the environment.
Understanding the structure of an AI agent
At its core, an AI agent is made up of five components: the environment, sensors, actuators, the decision-making mechanism, and a learning system.

1. Environment
The environment refers to the area or domain in which an AI
agent operates. It can be a physical space, like a factory
floor, or a digital space, like a website.

2. Sensors
Sensors are the tools that an AI agent uses to perceive its
environment. These can be cameras, microphones, or any
other sensory input that the AI agent can use to understand
what is happening around it.

3. Actuators
Actuators are the tools that an AI agent uses to interact with
its environment. These can be things like robotic arms,
computer screens, or any other device the AI agent can use
to change the environment.

4. Decision-making mechanism
A decision-making mechanism is the brain of an AI agent. It
processes the information gathered by the sensors and
decides what action to take using the actuators. The
decision-making mechanism is where the real magic
happens.

AI agents use various decision-making mechanisms, such as rule-based systems, expert systems, and neural networks, to make informed choices and perform tasks effectively.

5. Learning system
The learning system enables the AI agent to learn from its
experiences and interactions with the environment. It uses
techniques like reinforcement learning, supervised learning,
and unsupervised learning to improve the performance of the
AI agent over time.

By understanding the environment, sensors, actuators, decision-making mechanism, and learning system, developers can create AI agents that perform specific tasks accurately and efficiently. As AI technology evolves, we can expect new types of AI agents with even more sophisticated structures and capabilities.

With the AI agent definition and the purpose of an intelligent agent known to you, it's time to dive deeper into the agent function and analyze how an AI agent works in the next section.

How does an AI agent work?


An AI agent works by perceiving its environment, processing
information, and taking action to achieve specific goals or
tasks. The process usually involves the following steps:
Step 1: Perceiving the environment
An autonomous AI agent first needs to gather information
about its environment. It can do so using sensors or
collecting data from various sources.

Step 2: Processing input data


The agent takes the knowledge gathered in Step 1 and
prepares it for processing. This may include organizing the
data, creating a knowledge base, or making internal
representations that the agent can understand and work
with.

Step 3: Decision-making
The agent uses reasoning techniques like logic or statistical
analysis to make an informed decision based on its
knowledge base and goals. This can involve applying pre-
determined rules or machine learning algorithms.

Step 4: Planning and executing an action
The agent makes a plan or a series of steps to reach its
goals. This may involve creating a step-by-step strategy,
optimizing resource allocation, or considering various
limitations and priorities. Based on its plan, the agent
executes all the steps to achieve the desired goal. It can also
receive feedback or new information from the environment,
which can be used to adjust its future actions or update its
knowledge base.

Step 5: Learning and Improvement


After taking action, the agent can learn from its own
experiences. This feedback loop allows the agent to improve
performance and adapt to new situations and environments.
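The five steps above can be sketched as a minimal agent loop in Python. The thermostat environment, sensor values, and action names below are all hypothetical stand-ins, not a real control system:

```python
# A minimal sketch of the perceive -> process -> decide -> act -> learn loop.

class ThermostatAgent:
    """Toy agent that keeps a room near a target temperature."""

    def __init__(self, target=21.0):
        self.target = target
        self.knowledge = []            # Step 2: internal knowledge base

    def perceive(self, environment):   # Step 1: gather input from the environment
        return environment["temperature"]

    def decide(self, temperature):     # Step 3: decision-making from rules
        if temperature < self.target - 1:
            return "heat"
        if temperature > self.target + 1:
            return "cool"
        return "idle"

    def act(self, environment, action):  # Step 4: execute the chosen action
        delta = {"heat": +0.5, "cool": -0.5, "idle": 0.0}[action]
        environment["temperature"] += delta

    def learn(self, temperature, action):  # Step 5: record the experience
        self.knowledge.append((temperature, action))

room = {"temperature": 18.0}
agent = ThermostatAgent()
for _ in range(5):
    t = agent.perceive(room)
    a = agent.decide(t)
    agent.act(room, a)
    agent.learn(t, a)

print(room["temperature"])  # 20.0: four heating steps, then idle
```

The feedback loop is visible in `knowledge`: a more capable agent would use that history to adjust its rules, not just record it.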

What are the types of AI agents?
Agents can be grouped into five classes based on their degree of perceived intelligence and capability. All these agents can improve their performance and generate better actions over time. They are given below:

o Simple Reflex Agent
o Model-based Reflex Agent
o Goal-based Agent
o Utility-based Agent
o Learning Agent

1. Simple Reflex agent:


o Simple reflex agents are the simplest agents. They take decisions on the basis of the current percept and ignore the rest of the percept history.
o These agents succeed only in a fully observable environment.
o A simple reflex agent does not consider any part of the percept history during its decision and action process.
o A simple reflex agent works on the condition-action rule, which maps the current state to an action. For example, a room-cleaner agent works only if there is dirt in the room.
o Problems with the simple reflex agent design approach:
o They have very limited intelligence.
o They have no knowledge of non-perceptual parts of the current state.
o Their rule tables are mostly too big to generate and store.
o They are not adaptive to changes in the environment.
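The condition-action rule can be sketched in a few lines of Python, using a hypothetical two-location vacuum world (locations "A" and "B"); the percept format and action names are illustrative assumptions:

```python
# A minimal sketch of a simple reflex agent: it maps the current percept
# directly to an action via condition-action rules, ignoring all history.

def reflex_vacuum_agent(percept):
    """percept is a (location, status) pair; only the current percept matters."""
    location, status = percept
    if status == "dirty":
        return "suck"            # condition-action rule: dirt -> suck
    if location == "A":
        return "move_right"      # clean here, try the other square
    return "move_left"

print(reflex_vacuum_agent(("A", "dirty")))  # suck
print(reflex_vacuum_agent(("A", "clean")))  # move_right
```

Note that the agent has no memory: if both squares are already clean it will shuttle between them forever, which is exactly the "not adaptive" limitation listed above.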
2. Model-based reflex agent
o The model-based agent can work in a partially observable environment and track the situation.
o A model-based agent has two important factors:
o Model: knowledge about "how things happen in the world"; this is why it is called a model-based agent.
o Internal state: a representation of the current state based on the percept history.
o These agents have the model, "which is knowledge of the world," and perform actions based on it.
o Updating the agent state requires information about:
o how the world evolves, and
o how the agent's actions affect the world.

3. Goal-based agents
o Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
o The agent needs to know its goal, which describes desirable situations.
o Goal-based agents expand the capabilities of the model-based agent by having the "goal" information.
o They choose an action so that they can achieve the goal.
o These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved. Such consideration of different scenarios is called searching and planning, and it makes an agent proactive.
4. Utility-based agents
o These agents are similar to goal-based agents but add an extra component of utility measurement, which provides a measure of success in a given state.
o A utility-based agent acts based not only on goals but also on the best way to achieve them.
o The utility-based agent is useful when there are multiple possible alternatives and the agent has to choose the best action.
o The utility function maps each state to a real number, measuring how efficiently each action achieves the goals.

5. Learning Agents
o A learning agent in AI is a type of agent that can learn from its past experiences; it has learning capabilities.
o It starts acting with basic knowledge and is then able to act and adapt automatically through learning.
o A learning agent has mainly four conceptual components:

1. Learning element: responsible for making improvements by learning from the environment.
2. Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
3. Performance element: responsible for selecting external actions.
4. Problem generator: responsible for suggesting actions that will lead to new and informative experiences.

o Hence, learning agents are able to learn, analyze their performance, and look for new ways to improve it.
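The four components can be sketched as follows; the task (adjusting a decision threshold) and every number below are hypothetical illustrations:

```python
# A minimal sketch of the four conceptual components of a learning agent.

class LearningAgent:
    def __init__(self):
        self.threshold = 0.5        # knowledge used by the performance element

    def performance_element(self, x):
        """Select the external action from the current knowledge."""
        return "accept" if x >= self.threshold else "reject"

    def critic(self, action, correct_action):
        """Feedback: did we match the fixed performance standard?"""
        return action == correct_action

    def learning_element(self, x, ok):
        """Improve the knowledge when the critic reports an error."""
        if not ok:
            self.threshold = (self.threshold + x) / 2

    def problem_generator(self):
        """Suggest an exploratory input that may be informative."""
        return self.threshold + 0.01   # probe just above the boundary

agent = LearningAgent()
x = 0.3
action = agent.performance_element(x)       # 'reject' under current knowledge
ok = agent.critic(action, "accept")         # standard says it should be accepted
agent.learning_element(x, ok)               # threshold moves toward the input
print(round(agent.threshold, 2))  # 0.4
```

The flow mirrors the list above: the performance element acts, the critic judges the act against a standard, the learning element revises the knowledge, and the problem generator proposes the next experiment.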

What is an intelligent agent?


An intelligent agent is a software program that is able to
autonomously make decisions or take actions in order to achieve a
specific goal. In artificial intelligence, intelligent agents are
commonly used to solve complex tasks that are difficult or
impossible for humans to do.

One of the most famous examples of an intelligent agent is the computer program Deep Blue, which was developed by IBM in the 1990s. Deep Blue was designed to play chess, and it became the first computer program to beat a world champion chess player, Garry Kasparov, in a match in 1997.

What are the characteristics of an intelligent agent?


An intelligent agent is a computer program that is able to
autonomously make decisions and take actions in order to achieve
a goal.

There are four main characteristics of an intelligent agent:

1. Autonomy: An intelligent agent is autonomous, meaning it can act independently without human intervention.
2. Proactivity: An intelligent agent is proactive, meaning it takes initiative to achieve its goals.
3. Social ability: An intelligent agent is able to interact with other agents and humans in order to achieve its goals.
4. Reasoning: An intelligent agent is able to reason and make decisions based on its goals.

How do intelligent agents work?
Intelligent agents are a type of artificial intelligence that are used to interact with humans or other intelligent agents to complete tasks. There are three main components to an intelligent agent:

1. Sensors: These are used to gather information about the environment.
2. Actuators: These are used to take actions in the environment.
3. A reasoning system: This is used to process the information from the sensors and decide what actions to take.
What are the benefits of using intelligent agents?
There are many benefits of using intelligent agents in AI. One
benefit is that they can help you to automate tasks. For example,
you can set up an intelligent agent to monitor your email and
respond to certain types of messages automatically. This can save
you a lot of time and hassle.

Another benefit of using intelligent agents is that they can help you
to make better decisions. This is because they can gather and
process information more effectively than humans. This means that
they can help you to identify patterns and trends that you might
otherwise miss.

Overall, intelligent agents can save you time and help you to make
better decisions. They are an essential tool for anyone working with
AI.

What are some challenges associated with intelligent agents?
One of the key challenges associated with intelligent agents is the
so-called "frame problem". This is the problem of how an agent can
reason about the effects of its actions when those effects are not
fully known or understood. This can be a difficult problem to solve,
especially when the agent is operating in a complex and dynamic
environment.

Another challenge associated with intelligent agents is the issue of scalability. As agents become more intelligent and capable, they also tend to become more resource-intensive. This can make it difficult to deploy them on a large scale or to use them in real-time applications.
Finally, another challenge associated with intelligent agents is the issue of trust. As agents become more powerful, there is a risk that they could be used for malicious purposes. This means that it is important to design them in a way that makes it difficult for them to be misused.
Difference Between Artificial Intelligence vs Machine Learning vs Deep Learning

1. Definition
AI: AI stands for Artificial Intelligence, and is basically the study/process which enables machines to mimic human behaviour through particular algorithms.
ML: ML stands for Machine Learning, and is the study that uses statistical methods enabling machines to improve with experience.
DL: DL stands for Deep Learning, and is the study that makes use of neural networks (similar to neurons present in the human brain) to imitate functionality just like a human brain.

2. Scope
AI: AI is the broader family consisting of ML and DL as its components.
ML: ML is a subset of AI.
DL: DL is a subset of ML.

3. Nature
AI: AI is a computer algorithm which exhibits intelligence through decision making.
ML: ML is an AI algorithm which allows a system to learn from data.
DL: DL is an ML algorithm that uses deep (more than one layer) neural networks to analyze data and provide output accordingly.

4. Approach
AI: Search trees and much more complex math are involved in AI.
ML: If you have a clear idea about the logic (math) involved and you can visualize the complex functionalities like K-Means, Support Vector Machines, etc., then it defines the ML aspect.
DL: If you are clear about the math involved but don't have an idea about the features, so you break the complex functionalities into linear/lower-dimension features by adding more layers, then it defines the DL aspect.

5. Aim
AI: The aim is basically to increase the chances of success, not accuracy.
ML: The aim is to increase accuracy, not caring much about the success ratio.
DL: It attains the highest rank in terms of accuracy when it is trained with a large amount of data.

6. Categories
AI: Three broad categories/types of AI are Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI).
ML: Three broad categories/types of ML are Supervised Learning, Unsupervised Learning and Reinforcement Learning.
DL: DL can be considered as neural networks with a large number of parameters and layers, lying in one of four fundamental network architectures: Unsupervised Pre-trained Networks, Convolutional Neural Networks, Recurrent Neural Networks and Recursive Neural Networks.

7. Efficiency
AI: The efficiency of AI is basically the efficiency provided by ML and DL respectively.
ML: Less efficient than DL, as it can't work for longer dimensions or higher amounts of data.
DL: More powerful than ML, as it can easily work for larger sets of data.

8. Examples
AI: Examples of AI applications include Google's AI-powered predictions, ridesharing apps like Uber and Lyft, commercial flights using an AI autopilot, etc.
ML: Examples of ML applications include virtual personal assistants (Siri, Alexa, Google, etc.) and email spam and malware filtering.
DL: Examples of DL applications include sentiment-based news aggregation, image analysis and caption generation, etc.

9. Field
AI: AI refers to the broad field of computer science that focuses on creating intelligent machines that can perform tasks that would normally require human intelligence, such as reasoning, perception, and decision-making.
ML: ML is a subset of AI that focuses on developing algorithms that can learn from data and improve their performance over time without being explicitly programmed.
DL: DL is a subset of ML that focuses on developing deep neural networks that can automatically learn and extract features from data.

10. Subfields and learning styles
AI: AI can be further broken down into various subfields such as robotics, natural language processing, computer vision, expert systems, and more.
ML: ML algorithms can be categorized as supervised, unsupervised, or reinforcement learning. In supervised learning, the algorithm is trained on labeled data, where the desired output is known. In unsupervised learning, the algorithm is trained on unlabeled data, where the desired output is unknown.
DL: DL algorithms are inspired by the structure and function of the human brain, and they are particularly well-suited to tasks such as image and speech recognition.

11. System types
AI: AI systems can be rule-based, knowledge-based, or data-driven.
ML: In reinforcement learning, the algorithm learns by trial and error, receiving feedback in the form of rewards or punishments.
DL: DL networks consist of multiple layers of interconnected neurons that process data in a hierarchical manner, allowing them to learn increasingly complex representations of the data.

Search Algorithms:
Artificial Intelligence is the study of building agents that act
rationally. Most of the time, these agents perform some kind of
search algorithm in the background in order to achieve their tasks.
 A search problem consists of:
 A State Space: the set of all possible states you can be in.
 A Start State: the state from where the search begins.
 A Goal Test: a function that looks at the current state and
returns whether or not it is the goal state.
 The Solution to a search problem is a sequence of actions, called
the plan, that transforms the start state into the goal state.
 This plan is found by means of search algorithms.

Types of search algorithms:

There are far too many powerful search algorithms out there to fit
in a single article. Instead, this article will discuss six of the
fundamental search algorithms, divided into two categories, as
shown below.
Uninformed Search Algorithms:

The search algorithms in this section have no additional
information about the goal node other than the one provided in the
problem definition. The plans to reach the goal state from the start
state differ only by the order and/or length of actions. Uninformed
search is also called blind search. These algorithms can only
generate the successors and differentiate between a goal state
and a non-goal state.
The following uninformed search algorithms are discussed in this
section.
1. Depth First Search
2. Breadth First Search
3. Uniform Cost Search
Each of these algorithms will have:
 A problem graph, containing the start node S and the goal
node G.
 A strategy, describing the manner in which the graph will
be traversed to get to G.
 A fringe, which is a data structure used to store all the
possible states (nodes) that you can go from the current
states.
 A tree, that results while traversing to the goal node.
 A solution plan, which is the sequence of nodes from S to G.
1. Breadth-first Search:
o Breadth-first search is the most common search strategy for traversing a
tree or graph. This algorithm searches breadthwise in a tree or graph, so it
is called breadth-first search.
o The BFS algorithm starts searching from the root node of the tree and
expands all successor nodes at the current level before moving to nodes of
the next level.
o The breadth-first search algorithm is an example of a general-graph
search algorithm.
o Breadth-first search is implemented using a FIFO queue data structure.

Advantages:

o BFS will provide a solution if any solution exists.


o If there are more than one solutions for a given problem, then BFS will
provide the minimal solution which requires the least number of steps.

Disadvantages:

o It requires lots of memory since each level of the tree must be saved into
memory to expand the next level.
o BFS needs lots of time if the solution is far away from the root node.

Example:
In the below tree structure, we have shown the traversing of the tree
using BFS algorithm from the root node S to goal node K. BFS search
algorithm traverse in layers, so it will follow the path which is shown by
the dotted arrow, and the traversed path will be:

1. S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
Time Complexity: The time complexity of BFS can be obtained from the number
of nodes traversed until the shallowest goal node, where d = depth of the
shallowest solution and b = branching factor (the maximum number of
successors of any node):

T(b) = 1 + b + b^2 + ... + b^d = O(b^d)

Space Complexity: The space complexity of the BFS algorithm is given by the
memory size of the frontier, which is O(b^d).

Completeness: BFS is complete, which means that if the shallowest goal
node is at some finite depth, then BFS will find a solution.

Optimality: BFS is optimal if the path cost is a non-decreasing function of
the depth of the node.
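As a concrete illustration, here is a minimal Python sketch of BFS with a
FIFO fringe. The example graph is a small hypothetical one, not the figure
referenced in the example above.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand all nodes at one level before the
    next, using a FIFO queue as the fringe. Returns a fewest-steps path."""
    frontier = deque([[start]])        # FIFO queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()      # shallowest path first
        node = path[-1]
        if node == goal:
            return path
        for succ in graph.get(node, []):
            if succ not in visited:    # skip already-generated states
                visited.add(succ)
                frontier.append(path + [succ])
    return None                        # no solution exists

# A small hypothetical graph (not the figure from the notes)
graph = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['E', 'F'], 'E': ['K']}
print(bfs(graph, 'S', 'K'))   # ['S', 'B', 'E', 'K']
```

Because whole levels are kept on the queue, the memory use grows as O(b^d),
exactly as noted above.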

2. Depth-first Search
o Depth-first search is a recursive algorithm for traversing a tree or graph
data structure.
o It is called the depth-first search because it starts from the root node and
follows each path to its greatest depth node before moving to the next
path.
o DFS uses a stack data structure for its implementation.
o The process of the DFS algorithm is similar to the BFS algorithm.

Note: Backtracking is an algorithmic technique for finding all possible
solutions using recursion.

Advantage:
o DFS requires much less memory, as it only needs to store a stack of the
nodes on the path from the root node to the current node.
o It takes less time to reach the goal node than the BFS algorithm (if it
traverses the right path).

Disadvantage:

o There is the possibility that many states keep recurring, and there is no
guarantee of finding the solution.
o The DFS algorithm searches deep down and may sometimes descend into an
infinite loop.

Example:
In the below search tree, we have shown the flow of depth-first search,
and it will follow the order as:

Root node--->Left node ----> right node.

It will start searching from root node S, and traverse A, then B, then D and
E, after traversing E, it will backtrack the tree as E has no other successor
and still goal node is not found. After backtracking it will traverse node C
and then G, and here it will terminate as it found goal node.

Completeness: The DFS algorithm is complete within a finite state space, as
it will expand every node within a limited search tree.
Time Complexity: The time complexity of DFS is equivalent to the number of
nodes traversed by the algorithm. It is given by:

T(b) = 1 + b + b^2 + ... + b^m = O(b^m)

where m = the maximum depth of any node, which can be much larger than d
(the depth of the shallowest solution).

Space Complexity: The DFS algorithm needs to store only a single path from
the root node, so its space complexity is equivalent to the size of the
fringe set, which is O(bm).

Optimality: The DFS algorithm is non-optimal, as it may take a large number
of steps or incur a high cost to reach the goal node.
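The same search can be sketched with an explicit stack instead of recursion;
the hypothetical tree below mirrors the narrative above (visit S, A, B, D, E,
backtrack, then C and G).

```python
def dfs(graph, start, goal):
    """Depth-first search with an explicit LIFO stack: follow one path to
    its deepest node, then backtrack and try the next path."""
    stack = [[start]]
    visited = set()
    while stack:
        path = stack.pop()             # most recently added path first
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # push children in reverse so the leftmost child is explored first
        for succ in reversed(graph.get(node, [])):
            stack.append(path + [succ])
    return None

# Hypothetical tree: DFS visits S, A, B, D, E, backtracks, then C, G
graph = {'S': ['A', 'C'], 'A': ['B'], 'B': ['D', 'E'], 'C': ['G']}
print(dfs(graph, 'S', 'G'))   # ['S', 'C', 'G']
```

Note that only the current path and its siblings sit on the stack, which is
why the memory use stays at O(bm).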

3. Depth-Limited Search Algorithm:


A depth-limited search algorithm is similar to depth-first search with a
predetermined depth limit. Depth-limited search can overcome the drawback
of infinite paths in depth-first search. In this algorithm, the node at the
depth limit is treated as if it has no further successor nodes.

Depth-limited search can be terminated with two Conditions of failure:

o Standard failure value: It indicates that the problem does not have any
solution.
o Cutoff failure value: It indicates that there is no solution for the
problem within the given depth limit.

Advantages:

o Depth-limited search is memory efficient.

Disadvantages:

o Depth-limited search also has a disadvantage of incompleteness.


o It may not be optimal if the problem has more than one solution.
Example:

Completeness: The DLS algorithm is complete if the solution lies within the
depth limit.

Time Complexity: The time complexity of the DLS algorithm is O(b^ℓ).

Space Complexity: Space complexity of DLS algorithm is O(b×ℓ).

Optimality: Depth-limited search can be viewed as a special case of DFS, and
it is likewise not optimal, even if ℓ > d.
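A minimal recursive sketch that distinguishes the two failure values is
shown below; the graph is a hypothetical example.

```python
def dls(graph, node, goal, limit):
    """Depth-limited search: DFS that treats nodes at depth `limit` as if
    they had no successors. Returns a path, the string 'cutoff' when the
    limit was hit, or None (standard failure: no solution at all)."""
    if node == goal:
        return [node]
    if limit == 0:
        return 'cutoff'                # cutoff failure: depth limit reached
    cutoff_occurred = False
    for succ in graph.get(node, []):
        result = dls(graph, succ, goal, limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    return 'cutoff' if cutoff_occurred else None

graph = {'S': ['A', 'B'], 'A': ['C'], 'C': ['G']}
print(dls(graph, 'S', 'G', 3))   # ['S', 'A', 'C', 'G']
print(dls(graph, 'S', 'G', 2))   # 'cutoff'
```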

4. Uniform-cost Search Algorithm:


Uniform-cost search is a searching algorithm used for traversing a
weighted tree or graph. This algorithm comes into play when a different
cost is available for each edge. The primary goal of the uniform-cost
search is to find a path to the goal node which has the lowest cumulative
cost. Uniform-cost search expands nodes according to their path costs
from the root node. It can be used to solve any graph/tree where the
optimal cost is in demand. A uniform-cost search algorithm is
implemented by the priority queue. It gives maximum priority to the
lowest cumulative cost. Uniform cost search is equivalent to BFS algorithm
if the path cost of all edges is the same.

Advantages:

o Uniform cost search is optimal because at every state the path with the
least cost is chosen.
Disadvantages:

o It does not care about the number of steps involved in the search and is
only concerned with path cost, due to which this algorithm may get stuck in
an infinite loop.

Example:

Completeness:

Uniform-cost search is complete: if there is a solution, UCS will find it
(provided every step cost is at least some small positive ε).

Time Complexity:

Let C* be the cost of the optimal solution and ε the smallest step cost
toward the goal node. Then the worst-case number of steps is C*/ε + 1 (the
+1 because we start from state 0 and end at C*/ε).

Hence, the worst-case time complexity of uniform-cost search is
O(b^(1 + ⌊C*/ε⌋)).

Space Complexity:

The same reasoning applies to space, so the worst-case space complexity of
uniform-cost search is also O(b^(1 + ⌊C*/ε⌋)).

Optimal:
Uniform-cost search is always optimal as it only selects a path with the
lowest path cost.
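A priority-queue sketch of UCS is shown below. The weighted graph and its
edge costs are hypothetical, chosen only to show a cheaper multi-step path
beating a direct expensive one.

```python
import heapq

def ucs(graph, start, goal):
    """Uniform-cost search: always expand the frontier node with the
    lowest cumulative path cost, using a priority queue as the fringe."""
    frontier = [(0, start, [start])]   # (path cost g, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path          # lowest-cost path to the goal
        if node in explored:
            continue
        explored.add(node)
        for succ, step in graph.get(node, {}).items():
            if succ not in explored:
                heapq.heappush(frontier, (cost + step, succ, path + [succ]))
    return None

# Hypothetical weighted graph: edge costs are illustrative only
graph = {'S': {'A': 1, 'B': 5}, 'A': {'B': 2, 'G': 9}, 'B': {'G': 1}}
print(ucs(graph, 'S', 'G'))   # (4, ['S', 'A', 'B', 'G'])
```

With all edge costs equal, this behaves exactly like BFS, as the notes point
out.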

5. Iterative Deepening Depth-first Search:


The iterative deepening algorithm is a combination of DFS and BFS
algorithms. This search algorithm finds out the best depth limit and does it
by gradually increasing the limit until a goal is found.

This algorithm performs depth-first search up to a certain "depth limit",
and it keeps increasing the depth limit after each iteration until the goal
node is found.

This Search algorithm combines the benefits of Breadth-first search's fast
search and depth-first search's memory efficiency.

The iterative deepening algorithm is a useful uninformed search when the
search space is large and the depth of the goal node is unknown.

Advantages:

o It combines the benefits of the BFS and DFS search algorithms in terms of fast
search and memory efficiency.

Disadvantages:

o The main drawback of IDDFS is that it repeats all the work of the previous
phase.

Example:
Following tree structure is showing the iterative deepening depth-first
search. IDDFS algorithm performs various iterations until it does not find
the goal node. The iteration performed by the algorithm is given as:
1'st Iteration-----> A
2'nd Iteration----> A, B, C
3'rd Iteration------>A, B, D, E, C, F, G
4'th Iteration------>A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.

Completeness:

This algorithm is complete if the branching factor is finite.

Time Complexity:

Let b be the branching factor and d the depth of the shallowest solution;
then the worst-case time complexity is O(b^d).

Space Complexity:

The space complexity of IDDFS will be O(bd).

Optimal:

The IDDFS algorithm is optimal if the path cost is a non-decreasing function
of the depth of the node.
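The iterations listed above can be reproduced with a short sketch: run a
depth-limited DFS with limits 0, 1, 2, ... until the goal appears. The tree
is a hypothetical one laid out so that K sits under F, matching the fourth
iteration.

```python
def iddfs(graph, start, goal, max_depth=20):
    """Iterative deepening DFS: run a depth-limited DFS with limit
    0, 1, 2, ... until the goal is found. Each round repeats the work of
    the previous one, but memory use stays as low as plain DFS."""
    def dls(node, limit, path):
        if node == goal:
            return path
        if limit == 0:
            return None
        for succ in graph.get(node, []):
            found = dls(succ, limit - 1, path + [succ])
            if found:
                return found
        return None

    for limit in range(max_depth + 1):
        result = dls(start, limit, [start])
        if result:
            return result
    return None

# Hypothetical tree matching the iterations listed above (K under F)
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
         'D': ['H', 'I'], 'F': ['K']}
print(iddfs(graph, 'A', 'K'))   # ['A', 'C', 'F', 'K']
```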

6. Bidirectional Search Algorithm:


The bidirectional search algorithm runs two simultaneous searches,
one from the initial state, called the forward search, and the other
from the goal node, called the backward search, to find the goal node.
Bidirectional search replaces one single search graph with two
small subgraphs, in which one starts the search from the initial
vertex and the other starts from the goal vertex. The search stops when
these two graphs intersect each other.

Bidirectional search can use search techniques such as BFS, DFS,
DLS, etc.

Advantages:

o Bidirectional search is fast.


o Bidirectional search requires less memory

Disadvantages:

o Implementation of the bidirectional search tree is difficult.


o In bidirectional search, one should know the goal state in
advance.

Example:
In the below search tree, bidirectional search algorithm is applied. This
algorithm divides one graph/tree into two sub-graphs. It starts traversing
from node 1 in the forward direction and starts from goal node 16 in the
backward direction.

The algorithm terminates at node 9 where two searches meet.


Completeness: Bidirectional Search is complete if we use BFS in both
searches.

Time Complexity: The time complexity of bidirectional search using BFS
is O(b^(d/2)), since each of the two searches only needs to reach about
half the solution depth.

Space Complexity: The space complexity of bidirectional search is O(b^(d/2)).

Optimal: Bidirectional search is Optimal.
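The meeting-point idea can be sketched with two BFS frontiers that each keep
a parent map for path reconstruction. The undirected graph below is a
hypothetical example whose node numbers loosely echo the 1-to-16 example in
the notes.

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Two simultaneous BFS frontiers, one forward from the start and one
    backward from the goal; the search stops where they intersect.
    Assumes an undirected graph given as an adjacency dict."""
    if start == goal:
        return [start]
    fwd_parent, bwd_parent = {start: None}, {goal: None}
    fwd_q, bwd_q = deque([start]), deque([goal])
    while fwd_q and bwd_q:
        # advance each frontier by one node per round
        for q, parent, other in ((fwd_q, fwd_parent, bwd_parent),
                                 (bwd_q, bwd_parent, fwd_parent)):
            node = q.popleft()
            for succ in graph.get(node, []):
                if succ not in parent:
                    parent[succ] = node
                    q.append(succ)
                if succ in other:      # the two searches meet here
                    return build_path(succ, fwd_parent, bwd_parent)
    return None

def build_path(meet, fwd_parent, bwd_parent):
    """Stitch the two half-paths together at the meeting node."""
    path, node = [], meet
    while node is not None:            # walk back to the start
        path.append(node)
        node = fwd_parent[node]
    path.reverse()
    node = bwd_parent[meet]
    while node is not None:            # walk forward to the goal
        path.append(node)
        node = bwd_parent[node]
    return path

# Hypothetical undirected graph; node numbers loosely echo the example
graph = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2, 9],
         9: [4, 14], 14: [9, 16], 16: [14]}
print(bidirectional_search(graph, 1, 16))   # [1, 2, 4, 9, 14, 16]
```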

Informed Search Algorithms:

Heuristic function: A heuristic is a function that is used in informed
search to find the most promising path. It takes the current state of the
agent as its input and produces an estimate of how close the agent is to
the goal. The heuristic method might not always give the best solution, but
it is guaranteed to find a good solution in a reasonable time. A heuristic
function estimates how close a state is to the goal. It is represented by
h(n), and it estimates the cost of an optimal path between the pair of
states. The value of the heuristic function is always positive.

Admissibility of the heuristic function is given as:

1. h(n) <= h*(n)

Here h(n) is the heuristic cost and h*(n) is the actual cost of the optimal
path. Hence the heuristic cost should be less than or equal to the actual
cost.

Pure Heuristic Search:


Pure heuristic search is the simplest form of heuristic search algorithms. It
expands nodes based on their heuristic value h(n). It maintains two lists,
OPEN and CLOSED list. In the CLOSED list, it places those nodes which
have already expanded and in the OPEN list, it places nodes which have
yet not been expanded.

On each iteration, the node n with the lowest heuristic value is expanded,
all its successors are generated, and n is placed on the CLOSED list. The
algorithm continues until a goal state is found.

In the informed search we will discuss two main algorithms which are
given below:

o Best First Search Algorithm(Greedy search)


o A* Search Algorithm
1.) Best-first Search Algorithm (Greedy Search):
The greedy best-first search algorithm always selects the path which appears
best at that moment. It is a combination of depth-first search and
breadth-first search. It uses the heuristic function to guide the search.
Best-first search allows us to take advantage of both algorithms. With
the help of best-first search, at each step, we can choose the most
promising node. In the best-first search algorithm, we expand the node
which is closest to the goal node, where the closeness is estimated by the
heuristic function, i.e.

1. f(n) = h(n).

where h(n) = the estimated cost from node n to the goal.

The greedy best-first algorithm is implemented using a priority queue.

Best first search algorithm:


o Step 1: Place the starting node into the OPEN list.
o Step 2: If the OPEN list is empty, Stop and return failure.
o Step 3: Remove the node n with the lowest value of h(n) from the OPEN
list and place it in the CLOSED list.
o Step 4: Expand the node n, and generate the successors of node n.
o Step 5: Check each successor of node n, and find whether any node is a
goal node or not. If any successor node is goal node, then return success
and terminate the search, else proceed to Step 6.
o Step 6: For each successor node, the algorithm computes the evaluation
function f(n) and then checks whether the node is already in the OPEN or
CLOSED list. If the node is in neither list, add it to the OPEN list.
o Step 7: Return to Step 2.

Advantages:
o Best first search can switch between BFS and DFS by gaining the
advantages of both the algorithms.
o This algorithm is more efficient than BFS and DFS algorithms.

Disadvantages:
o It can behave as an unguided depth-first search in the worst case
scenario.
o It can get stuck in a loop as DFS.
o This algorithm is not optimal.

Example:
Consider the below search problem, and we will traverse it using greedy
best-first search. At each iteration, each node is expanded using
evaluation function f(n)=h(n) , which is given in the below table.

In this search example, we are using two lists which


are OPEN and CLOSED Lists. Following are the iteration for traversing the
above example.
Expand the nodes of S and put in the CLOSED list

Initialization: Open [A, B], Closed [S]

Iteration 1: Open [A], Closed [S, B]

Iteration 2: Open [E, F, A], Closed [S, B]
           : Open [E, A], Closed [S, B, F]

Iteration 3: Open [I, G, E, A], Closed [S, B, F]
           : Open [I, E, A], Closed [S, B, F, G]

Hence the final solution path will be: S ---> B ---> F ---> G

Time Complexity: The worst-case time complexity of greedy best-first
search is O(b^m).

Space Complexity: The worst-case space complexity of greedy best-first
search is O(b^m), where m is the maximum depth of the search space.

Completeness: Greedy best-first search is incomplete, even if the given
state space is finite.

Optimal: Greedy best first search algorithm is not optimal.
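The OPEN/CLOSED bookkeeping above maps naturally onto a priority queue keyed
by h(n). The graph and h-values below are hypothetical stand-ins for the
figure (which is not reproduced here), chosen so the search follows the
same solution path S ---> B ---> F ---> G as the worked example.

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search: expand the OPEN node with the smallest
    heuristic value h(n), ignoring the path cost g(n) entirely."""
    open_list = [(h[start], start, [start])]
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for succ in graph.get(node, []):
            if succ not in closed:
                heapq.heappush(open_list, (h[succ], succ, path + [succ]))
    return None

# Hypothetical graph and heuristic values (not the figure from the notes)
graph = {'S': ['A', 'B'], 'B': ['E', 'F'], 'F': ['I', 'G']}
h = {'S': 13, 'A': 12, 'B': 4, 'E': 8, 'F': 2, 'I': 9, 'G': 0}
print(greedy_best_first(graph, h, 'S', 'G'))   # ['S', 'B', 'F', 'G']
```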

2.) A* Search Algorithm:


A* search is the most commonly known form of best-first search. It uses the
heuristic function h(n) together with the cost g(n) to reach node n from
the start state. It combines the features of UCS and greedy best-first
search, which lets it solve problems efficiently. The A* search algorithm
finds the shortest path through the search space using the heuristic
function. It expands a smaller search tree and provides an optimal result
faster. A* is similar to UCS except that it uses g(n) + h(n) instead of
g(n).

In the A* search algorithm, we use the search heuristic as well as the cost
to reach the node. Hence we can combine both costs as follows, and this sum
is called the fitness number:

f(n) = g(n) + h(n)

At each point in the search space, only those nodes are expanded which have
the lowest value of f(n), and the algorithm terminates when the goal node is
found.

Algorithm of A* search:
Step1: Place the starting node in the OPEN list.

Step 2: Check if the OPEN list is empty or not, if the list is empty then
return failure and stops.

Step 3: Select the node from the OPEN list which has the smallest value
of evaluation function (g+h), if node n is goal node then return success
and stop, otherwise

Step 4: Expand node n and generate all of its successors, and put n into
the closed list. For each successor n', check whether n' is already in the
OPEN or CLOSED list, if not then compute evaluation function for n' and
place into Open list.

Step 5: Else, if node n' is already in the OPEN or CLOSED list, attach it to
the back pointer which reflects the lowest g(n') value.

Step 6: Return to Step 2.

Advantages:
o The A* search algorithm performs better than other search algorithms.
o A* search algorithm is optimal and complete.
o This algorithm can solve very complex problems.

Disadvantages:
o It does not always produce the shortest path, as it is mostly based on
heuristics and approximation.
o A* search algorithm has some complexity issues.
o The main drawback of A* is memory requirement as it keeps all generated
nodes in the memory, so it is not practical for various large-scale
problems.

Example:
In this example, we will traverse the given graph using the A* algorithm. The
heuristic value of all states is given in the below table so we will calculate the
f(n) of each state using the formula f(n)= g(n) + h(n), where g(n) is the cost to
reach any node from start state.
Here we will use OPEN and CLOSED list.

Solution:
Initialization: {(S, 5)}

Iteration1: {(S--> A, 4), (S-->G, 10)}

Iteration2: {(S--> A-->C, 4), (S--> A-->B, 7), (S-->G, 10)}

Iteration3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7),
(S-->G, 10)}

Iteration 4 will give the final result, as S--->A--->C--->G it provides the


optimal path with cost 6.

Points to remember:

o A* algorithm returns the path which occurred first, and it does not search
for all remaining paths.
o The efficiency of A* algorithm depends on the quality of heuristic.
o The A* algorithm expands all nodes which satisfy the condition f(n) ≤ C*,
where C* is the cost of the optimal solution.

Complete: A* algorithm is complete as long as:

o The branching factor is finite.
o The cost of every action is fixed (bounded below by a small positive
constant).

Optimal: A* search algorithm is optimal if it follows below two conditions:


o Admissible: the first condition required for optimality is that h(n)
should be an admissible heuristic for A* tree search. An admissible
heuristic is optimistic in nature; it never overestimates the true cost.
o Consistency: the second condition, required only for A* graph search, is
that h(n) should be consistent.

If the heuristic function is admissible, then A* tree search will always find
the least cost path.

Time Complexity: The time complexity of the A* search algorithm depends on
the heuristic function; in the worst case the number of nodes expanded is
exponential in the depth of the solution d, so the time complexity is
O(b^d), where b is the branching factor.

Space Complexity: The space complexity of the A* search algorithm is
O(b^d), since it keeps all generated nodes in memory.
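A compact sketch of A* with a priority queue ordered by f(n) = g(n) + h(n)
is shown below. The edge costs and heuristic values are assumptions chosen
to be consistent with the iterations in the worked example above (the
original figure and table are not reproduced here).

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the OPEN node with the smallest f(n) = g(n) + h(n),
    where g(n) is the cost so far and h(n) the heuristic estimate."""
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        for succ, step in graph.get(node, {}).items():
            new_g = g + step
            if new_g < best_g.get(succ, float('inf')):  # keep lowest g(n')
                best_g[succ] = new_g
                heapq.heappush(open_list,
                               (new_g + h[succ], new_g, succ, path + [succ]))
    return None

# Hypothetical costs and heuristic values, consistent with the iterations
# shown above (f(S)=5, f(S-A)=4, f(S-G)=10, f(S-A-C)=4, f(S-A-B)=7, ...)
graph = {'S': {'A': 1, 'G': 10}, 'A': {'B': 2, 'C': 1}, 'C': {'D': 3, 'G': 4}}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
print(a_star(graph, h, 'S', 'G'))   # (6, ['S', 'A', 'C', 'G'])
```

Replacing h with an all-zero function makes this identical to uniform-cost
search, which matches the relationship described above.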

What is a local search heuristic algorithm?
Local search is a heuristic method that is widely used to find
approximate solutions to a variety of hard optimization problems.
The quality of the solution is evaluated based on a cost function
or objective function, and the goal is to find a solution that
minimizes or maximizes this function.

The basic idea behind a local search algorithm is to start from an
arbitrary initial solution and then iteratively apply local changes to
get a better solution, moving from one solution to another until a
stopping criterion is met. Typically, this criterion is when there are
no further improvements or a predefined number of iterations is
achieved. The solution obtained in the last iteration is an
approximation of the optimal solution to the problem.

At each iteration, local search explores the neighboring solutions
by making small changes to the current solution and searches for
the favorable transition to make. There are various methods for
defining a favorable solution as the next one from the list of
neighbor solutions. Here are some well known examples of such
methods: hill climbing, simulated annealing, and tabu search.
Hill Climbing Algorithm:
o Hill climbing algorithm is a local search algorithm which
continuously moves in the direction of increasing elevation/value to
find the peak of the mountain or best solution to the problem. It
terminates when it reaches a peak value where no neighbor has a
higher value.
o Hill climbing algorithm is a technique which is used for optimizing
the mathematical problems. One of the widely discussed examples
of Hill climbing algorithm is Traveling-salesman Problem in which we
need to minimize the distance traveled by the salesman.
o It is also called greedy local search as it only looks to its good
immediate neighbor state and not beyond that.
o A node of hill climbing algorithm has two components which are
state and value.
o Hill Climbing is mostly used when a good heuristic is available.
o In this algorithm, we don't need to maintain and handle the search
tree or graph as it only keeps a single current state.

Features of Hill Climbing:


Following are some main features of Hill Climbing Algorithm:

o Generate and Test variant: Hill climbing is a variant of the Generate
and Test method. The Generate and Test method produces feedback which
helps to decide which direction to move in the search space.
o Greedy approach: Hill-climbing algorithm search moves in the
direction which optimizes the cost.
o No backtracking: It does not backtrack the search space, as it
does not remember the previous states

State-space Diagram for Hill Climbing:


The state-space landscape is a graphical representation of the hill-
climbing algorithm which is showing a graph between various states of
algorithm and Objective function/Cost.
On Y-axis we have taken the function which can be an objective function
or cost function, and state-space on the x-axis. If the function on Y-axis is
cost then, the goal of search is to find the global minimum and local
minimum. If the function of Y-axis is Objective function, then the goal of
the search is to find the global maximum and local maximum.

Different regions in the state space landscape:


Local Maximum: Local maximum is a state which is better than its
neighbor states, but there is also another state which is higher than it.

Global Maximum: The global maximum is the best possible state in the
state-space landscape. It has the highest value of the objective function.

Current state: It is the state in the landscape diagram where the agent is
currently present.

Flat local maximum: It is a flat space in the landscape where all the
neighbor states of current states have the same value.

Shoulder: It is a plateau region which has an uphill edge.

Types of Hill Climbing Algorithm:


o Simple hill Climbing:
o Steepest-Ascent hill-climbing:
o Stochastic hill Climbing:

1. Simple Hill Climbing:


Simple hill climbing is the simplest way to implement a hill climbing
algorithm. It only evaluates the neighbor node state at a time and
selects the first one which optimizes current cost and set it as a
current state. It only checks it's one successor state, and if it finds better
than the current state, then move else be in the same state. This
algorithm has the following features:

o Less time consuming


o Less optimal solution and the solution is not guaranteed

Algorithm for Simple Hill Climbing:


o Step 1: Evaluate the initial state, if it is goal state then return success and
Stop.
o Step 2: Loop Until a solution is found or there is no new operator left to
apply.
o Step 3: Select and apply an operator to the current state.
o Step 4: Check new state:

1. If it is goal state, then return success and quit.

2. Else if it is better than the current state then assign new state as a
current state.

3. Else if not better than the current state, then return to step2.

o Step 5: Exit.

2. Steepest-Ascent hill climbing:


The steepest-Ascent algorithm is a variation of simple hill climbing
algorithm. This algorithm examines all the neighboring nodes of the
current state and selects one neighbor node which is closest to the goal
state. This algorithm consumes more time as it searches for multiple
neighbors

Algorithm for Steepest-Ascent hill climbing:


o Step 1: Evaluate the initial state, if it is goal state then return success and
stop, else make current state as initial state.
o Step 2: Loop until a solution is found or the current state does not
change.

1. Let SUCC be a state such that any successor of the current state will
be better than it.

2. For each operator that applies to the current state:

I. Apply the new operator and generate a new state.

II. Evaluate the new state.

III. If it is goal state, then return it and quit, else compare it to


the SUCC.

IV. If it is better than SUCC, then set new state as SUCC.

V. If the SUCC is better than the current state, then set current
state to SUCC.

o Step 3: Exit.
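The steepest-ascent loop above can be sketched in a few lines: evaluate
every neighbour and move only to a strictly better one. The 1-D objective
function and neighbour generator below are hypothetical toy examples, not
from the notes.

```python
def steepest_ascent(neighbors, value, state, max_iters=1000):
    """Steepest-ascent hill climbing: evaluate every neighbour of the
    current state and move to the best one; stop at a state none of whose
    neighbours is better (a peak, possibly only a local maximum)."""
    for _ in range(max_iters):
        succ = max(neighbors(state), key=value, default=None)
        if succ is None or value(succ) <= value(state):
            return state               # no improving neighbour: stop
        state = succ
    return state

# Hypothetical 1-D objective: maximize f(x) = -(x - 7)^2 over the integers
value = lambda x: -(x - 7) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(steepest_ascent(neighbors, value, 0))   # 7
```

Because the objective here has a single peak, the climb always ends at the
global maximum; on a multimodal landscape the same loop can stop at a local
maximum, which is exactly the problem discussed next.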

3. Stochastic hill climbing:


Stochastic hill climbing does not examine all of its neighbors before
moving. Rather, this search algorithm selects one neighbor node at random
and decides whether to choose it as the current state or examine another
state.

Problems in Hill Climbing Algorithm:


1. Local Maximum: A local maximum is a peak state in the landscape
which is better than each of its neighboring states, but there is another
state also present which is higher than the local maximum.

Solution: The backtracking technique can be a solution to the local maximum
problem in the state-space landscape. Create a list of promising paths so
that the algorithm can backtrack and explore other paths as well.
2. Plateau: A plateau is a flat area of the search space in which all the
neighbor states of the current state contain the same value, so the
algorithm cannot find a best direction in which to move. A hill-climbing
search might get lost in the plateau area.

Solution: The solution to the plateau problem is to take bigger steps while
searching, for example by randomly selecting a state far away from the
current state, so that the algorithm may find a non-plateau region.

3. Ridges: A ridge is a special form of local maximum. It is an area which
is higher than its surrounding areas but itself has a slope, and it cannot
be climbed in a single move.

Solution: By using bidirectional search, or by moving in several different
directions at once, we can mitigate this problem.
Simulated Annealing:
A hill-climbing algorithm which never makes a move towards a lower value is
guaranteed to be incomplete, because it can get stuck on a local maximum.
If the algorithm instead applies a pure random walk, moving to a random
successor, it may be complete but is not efficient. Simulated annealing is
an algorithm which yields both efficiency and completeness.

In metallurgy, annealing is the process of heating a metal or glass to a
high temperature and then cooling it gradually, which allows the material
to reach a low-energy crystalline state. The same idea is used in simulated
annealing, in which the algorithm picks a random move instead of the best
move. If the random move improves the state, the algorithm accepts it;
otherwise, it accepts the move with a probability less than 1, so the
algorithm occasionally moves downhill and explores another path.
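A common way to realize "accept with probability less than 1" is the
Boltzmann-style rule exp(delta/T) with a gradually lowered temperature T.
The geometric cooling schedule and the toy objective below are assumptions
for illustration; the notes do not specify a particular schedule.

```python
import math
import random

def simulated_annealing(value, neighbor, state, t0=10.0, cooling=0.95,
                        steps=500):
    """Simulated annealing: pick a random move; always accept an improving
    move, and accept a worsening move with probability exp(delta / T).
    The temperature T is lowered gradually, so downhill moves become rare
    as the search settles."""
    t = t0
    for _ in range(steps):
        succ = neighbor(state)
        delta = value(succ) - value(state)
        if delta > 0 or random.random() < math.exp(delta / t):
            state = succ              # accept the move
        t *= cooling                  # cool down (assumed geometric schedule)
    return state

# Hypothetical toy objective over the integers 0..10, maximum at x = 7
random.seed(0)
value = lambda x: -(x - 7) ** 2
neighbor = lambda x: min(10, max(0, x + random.choice([-1, 1])))
print(simulated_annealing(value, neighbor, 0))
```

Early on, T is large and almost any move is accepted (the random-walk
behaviour); as T shrinks, the rule degenerates into plain hill climbing.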

Adversarial Search in Artificial Intelligence:


AI Adversarial search: Adversarial search is a game-playing technique
where the agents are surrounded by a competitive environment. A conflicting
goal is given to the agents (multiagent). These agents compete with one
another and try to defeat one another in order to win the game. Such
conflicting goals give rise to the adversarial search. Here, game-playing
means discussing those games where human intelligence and logic
factor is used, excluding other factors such as luck factor. Tic-tac-toe,
chess, checkers, etc., are such type of games where no luck factor works,
only mind works.
Mathematically, this search is based on the concept of ‘Game Theory.’
According to game theory, a game is played between two players. To complete
the game, one has to win and the other automatically loses.
Techniques required to get the best optimal solution
There is always a need to choose those algorithms which provide the best
optimal solution in a limited time. So, we use the following techniques which
could fulfill our requirements:

 Pruning: A technique which allows ignoring the unwanted portions of the
search tree which make no difference to its final result.
 Heuristic Evaluation Function: It allows us to approximate the cost value
at each level of the search tree, before reaching the goal node.

Elements of Game Playing search


To play a game, we use a game tree to know all the possible choices and to
pick the best one out. There are following elements of a game-playing:

 S0: It is the initial state from where a game begins.


 PLAYER (s): It defines which player is having the current turn to make a
move in the state.
 ACTIONS (s): It defines the set of legal moves to be used in a state.
 RESULT (s, a): It is a transition model which defines the result of a
move.
 TERMINAL-TEST (s): It checks whether the game has ended and, if so,
returns true.
 UTILITY (s,p): It defines the final value with which the game has ended.
This function is also known as Objective function or Payoff function.
The price which the winner will get i.e.
 (-1): If the PLAYER loses.
 (+1): If the PLAYER wins.
 (0): If there is a draw between the PLAYERS.

For example, in chess or tic-tac-toe, there are three possible outcomes:
win, lose, or draw, with values +1, -1, or 0.
Let’s understand the working of the elements with the help of a game tree
designed for tic-tac-toe. Here, the node represents the game state and edges
represent the moves taken by the players.

A game-tree for tic-tac-toe

 INITIAL STATE (S0): The top node in the game-tree represents the
initial state and shows all the possible choices from which to pick one.
 PLAYER (s): There are two players, MAX and MIN. MAX begins the
game by picking the best move and placing X in an empty square.
 ACTIONS (s): Both players make moves in the empty boxes, turn by
turn.
 RESULT (s, a): The moves made by MIN and MAX will decide the
outcome of the game.
 TERMINAL-TEST (s): When all the empty boxes are filled (or a player
completes a line), it is a terminating state of the game.
 UTILITY: At the end, we get to know who wins, MAX or MIN, and the
corresponding value is assigned to them.
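The six elements above can be made concrete with a minimal sketch for tic-tac-toe. This is a hypothetical illustration, not part of the notes: the function names mirror the list above, and the board encoding (a tuple of 9 cells holding 'X', 'O', or None, with MAX playing 'X') is an assumption.

```python
# Hypothetical sketch of the game-formulation functions for tic-tac-toe.
# Board: tuple of 9 cells, each 'X', 'O', or None. MAX plays 'X'.

S0 = (None,) * 9                       # initial state: empty board

def PLAYER(s):
    # 'X' (MAX) moves first, so it is X's turn when the counts are equal.
    return 'X' if s.count('X') == s.count('O') else 'O'

def ACTIONS(s):
    return [i for i, cell in enumerate(s) if cell is None]   # legal moves

def RESULT(s, a):
    board = list(s)
    board[a] = PLAYER(s)               # transition model: apply move a
    return tuple(board)

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(s):
    for i, j, k in LINES:
        if s[i] is not None and s[i] == s[j] == s[k]:
            return s[i]
    return None

def TERMINAL_TEST(s):
    return winner(s) is not None or all(c is not None for c in s)

def UTILITY(s, p):
    w = winner(s)                      # payoff: +1 win, -1 loss, 0 draw
    return 0 if w is None else (+1 if w == p else -1)
```

For instance, `UTILITY(s, 'X')` returns +1 on any state where 'X' has completed a line, matching the payoff values listed above.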

Types of algorithms in Adversarial search


In a normal search, we follow a sequence of actions to reach the goal or to
finish the game optimally. But in an adversarial search, the result depends
on both players, who jointly decide the outcome of the game. It is also
obvious that the solution for the goal state will be an optimal one, because
each player tries to win the game in the shortest path and under limited
time. There are the following types of adversarial search:

 Minimax Algorithm
 Alpha-beta Pruning

Minimax Algorithm:

Minimax is a kind of backtracking algorithm that is used in
decision making and game theory to find the optimal move for
a player, assuming that the opponent also plays optimally. It
is widely used in two-player turn-based games such as Tic-Tac-
Toe, Backgammon, Mancala, Chess, etc.
In Minimax the two players are called maximizer and
minimizer. The maximizer tries to get the highest score
possible while the minimizer tries to do the opposite and get
the lowest score possible.
Every board state has a value associated with it. In a given
state if the maximizer has upper hand then, the score of the
board will tend to be some positive value. If the minimizer has
the upper hand in that board state then it will tend to be some
negative value. The values of the board are calculated by
some heuristics which are unique for every type of game.

Step 1: In the first step, the algorithm generates the entire game-tree
and applies the utility function to get the utility values for the terminal
states. In the tree diagram below, let A be the initial state of the
tree. Suppose the maximizer takes the first turn, with worst-case initial
value = -∞, and the minimizer takes the next turn, with worst-case initial
value = +∞.
Step 2: Now, first we find the utility values for the Maximizer. Its initial
value is -∞, so we compare each terminal value with the initial
value of the Maximizer and determine the higher node values. It finds the
maximum among them all.

o For node D: max(-1, -∞) => max(-1, 4) = 4
o For node E: max(2, -∞) => max(2, 6) = 6
o For node F: max(-3, -∞) => max(-3, -5) = -3
o For node G: max(0, -∞) => max(0, 7) = 7
Step 3: In the next step, it is the minimizer's turn, so it compares all
node values with +∞ and finds the 3rd-layer node values.

o For node B: min(4, 6) = 4
o For node C: min(-3, 7) = -3
Step 4: Now it is the Maximizer's turn again, and it chooses the
maximum of all node values to find the value for the root
node. In this game tree, there are only 4 layers, so we reach the root
node immediately, but in real games, there will be more than 4 layers.

o For node A: max(4, -3) = 4


That was the complete workflow of the minimax two-player game.
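The four steps above can be collected into one short recursive function. The sketch below is illustrative, not from the notes: it assumes the game tree is encoded as nested Python lists whose leaves are the terminal utilities from the diagram (-1, 4, 2, 6, -3, -5, 0, 7).

```python
def minimax(node, is_max):
    """Return the minimax value of node; leaves are plain utility values."""
    if not isinstance(node, list):     # terminal state: return its utility
        return node
    values = [minimax(child, not is_max) for child in node]
    return max(values) if is_max else min(values)

# Tree from the worked example: A -> B, C; B -> D, E; C -> F, G;
# terminal utilities D: (-1, 4), E: (2, 6), F: (-3, -5), G: (0, 7).
tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
print(minimax(tree, True))             # prints 4, matching Step 4
```

Running it reproduces the hand computation: D, E, F, G evaluate to 4, 6, -3, 7; B and C to 4 and -3; and the root A to 4.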

Properties of Mini-Max algorithm:


o Complete- The Min-Max algorithm is complete. It will definitely find a
solution (if one exists) in a finite search tree.
o Optimal- The Min-Max algorithm is optimal if both opponents play
optimally.
o Time complexity- As it performs a DFS of the game-tree, the
time complexity of the Min-Max algorithm is O(b^m), where b is the
branching factor of the game-tree and m is the maximum depth of
the tree.
o Space Complexity- The space complexity of the Mini-max algorithm is
similar to DFS, which is O(bm).

Limitation of the minimax Algorithm:


The main drawback of the minimax algorithm is that it becomes very slow
for complex games such as chess, Go, etc. These games have a huge
branching factor, and the player has many choices to decide among. This
limitation of the minimax algorithm can be mitigated by alpha-beta
pruning, which we discuss in the next topic.

What is Alpha Beta pruning?


Alpha-beta pruning is an optimisation of the minimax algorithm. The need
for pruning arises from the fact that game trees can become extremely
large in some cases. Some superfluous branches in the tree add to the
model's complexity without affecting the final decision. Alpha-beta
pruning circumvents this by saving the computer from having to examine
the entire tree. These unnecessary nodes slow the algorithm down; as a
result of deleting them, the algorithm becomes more efficient.

Condition for Alpha-beta pruning


 Alpha: At any point along the Maximizer path, alpha is the best
option or the highest value we have discovered. The initial value of
alpha is -∞.
 Beta: At any point along the Minimizer path, beta is the best
option or the lowest value we have discovered. The initial value of
beta is +∞.
 The condition for Alpha-beta pruning is α >= β.
 The alpha and beta values of each node must be kept track of.
Alpha can be updated only when it is MAX's turn, and beta can
be updated only when it is MIN's turn.
 MAX will update only alpha values and the MIN player will update
only beta values.
 While backtracking the tree, the node values are passed to the
upper nodes instead of the alpha and beta values.
 Only the alpha and beta values are passed down to child nodes.

Working of Alpha-beta Pruning
1. We first start with the initial move. We initially define the
alpha and beta values as the worst case, i.e., α = -∞ and β = +∞.
We prune a node only when alpha becomes greater than or
equal to beta.

2. Since the initial value of alpha is less than beta, we do not prune.
Now it is MAX's turn. So, at node D, the value of alpha is calculated. The
value of alpha at node D will be max(2, 3) = 3.

3. Now the next move will be on node B, and it is MIN's turn. So, at
node B, the value of beta will be min(3, +∞) = 3. So, at node B, the values
will be alpha = -∞ and beta = 3.
In the next step, the algorithm traverses the next successor of node B,
which is node E, and the values α = -∞ and β = 3 are passed down.

4. Now it is MAX's turn. So, at node E we look for the MAX value. The
current value of alpha at E is -∞ and it is compared with 5, so max(-∞, 5)
= 5. So, at node E, alpha = 5 and beta = 3. Now we can see that
alpha is greater than beta, which satisfies the pruning condition, so we
can prune the right successor of node E; the algorithm will not
traverse it, and the value at node E will be 5.
5. In the next step, the algorithm comes back to node A from node B. At
node A, alpha is changed to the maximum value, max(-∞, 3) = 3. So now
the values of alpha and beta at node A will be (3, +∞), and they are
transferred to node C. The same values are transferred to node F.

6. At node F, the value of alpha is compared with the left branch, which
is 0. So, max(0, 3) = 3; it is then compared with the right child, which
is 1, and max(3, 1) = 3, so α remains 3, but the node value of F
becomes 1.

7. Now node F returns the node value 1 to C, where it is compared with
the beta value at C. It is MIN's turn, so min(+∞, 1) = 1. Now at node C,
α = 3 and β = 1, and alpha is greater than beta, which again satisfies the
pruning condition. So the next successor of node C, i.e. G, is pruned,
and the algorithm does not compute the entire subtree of G.
Now, C returns the node value 1 to A, and the best value of A will be
max(1, 3) = 3.

The tree above is the final tree, showing which nodes were computed and
which were pruned. So, for this example, the optimal value for the
maximizer is 3.
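The walkthrough above can be reproduced with a short recursive sketch (illustrative, not from the notes). The leaf values 2, 3, 5, 0, 1 come from the worked example; the pruned right child of E (9 here) and the leaves under the pruned node G (7, 5 here) are arbitrary placeholders, since the algorithm never reads them.

```python
import math

def alphabeta(node, is_max, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta cut-offs; leaves are plain utility values."""
    if not isinstance(node, list):
        return node
    if is_max:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:          # prune the remaining siblings
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# Tree from the worked example: A -> B, C; B -> D, E; C -> F, G;
# leaves D: (2, 3), E: (5, 9), F: (0, 1), G: (7, 5).
tree = [[[2, 3], [5, 9]], [[0, 1], [7, 5]]]
print(alphabeta(tree, True))           # prints 3, the maximizer's optimum
```

As in the hand trace, E is cut off after its first leaf (5 >= β = 3) and the whole subtree G is skipped once C's beta drops to 1 below A's alpha of 3.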

For complete implementations, look at open-source Python libraries.


Move Ordering in Pruning
The effectiveness of alpha-beta pruning depends on the order in which
nodes are examined. Move ordering plays an important role in alpha-beta
pruning.

There are two types of move ordering in Alpha beta pruning:

1. Worst Ordering: In some cases of alpha-beta pruning, none of
the nodes is pruned by the algorithm, and it works like the standard
minimax algorithm. This consumes a lot of time, because of the
extra alpha and beta bookkeeping, and gives no benefit.
This is called worst ordering in pruning. In this case, the best
move occurs on the right side of the tree.
2. Ideal Ordering: In some cases of alpha-beta pruning, a lot of the
nodes are pruned by the algorithm. This is called ideal ordering in
pruning. In this case, the best move occurs on the left side of the
tree. Since we apply DFS, it searches the left of the tree first and
can go twice as deep as the minimax algorithm in the same amount
of time.
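The effect of ordering can be demonstrated by counting leaf evaluations on two mirror-image trees. This is a hypothetical demo: the two tiny trees are invented for illustration, and `ab_count` is an alpha-beta sketch extended with a leaf counter.

```python
import math

def ab_count(node, is_max, alpha=-math.inf, beta=math.inf, count=None):
    """Alpha-beta that also tallies how many leaves it evaluates."""
    if not isinstance(node, list):
        count[0] += 1                  # one more leaf examined
        return node
    best = -math.inf if is_max else math.inf
    for child in node:
        v = ab_count(child, not is_max, alpha, beta, count)
        if is_max:
            best = max(best, v); alpha = max(alpha, best)
        else:
            best = min(best, v); beta = min(beta, best)
        if alpha >= beta:              # cut off the remaining siblings
            break
    return best

ideal = [[3, 2], [1, 0]]   # best MIN branch first: one leaf is pruned
worst = [[0, 1], [2, 3]]   # best MIN branch last: nothing is pruned
for name, tree in [("ideal", ideal), ("worst", worst)]:
    count = [0]
    value = ab_count(tree, True, count=count)
    print(name, "value =", value, "leaves evaluated =", count[0])
# prints: ideal value = 2 leaves evaluated = 3
#         worst value = 2 leaves evaluated = 4
```

Both orderings return the same game value; only the amount of work differs, which is why strong game programs invest heavily in move-ordering heuristics.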

Question Bank

o Explain the DFS algorithm with a suitable example.
o Explain Alpha-Beta pruning.
o Explain the search algorithm terminologies and the types of search
algorithms.
o How many different kinds of agents exist in Artificial Intelligence?
o What is the depth-first search algorithm?
o List some applications of AI.
o What are the goals of AI?
o What is a heuristic function and where is it used?
o What is the role of intelligent agents and where are they used?
o Give the steps of the A* algorithm.
o Explain the types of Artificial Intelligence.
o What is the difference between Artificial Intelligence, Machine
Learning, and Deep Learning?
o What are the different components of an Expert System?
