
Bachelor of Computer Application (BCA)

Programme

Seminar Report

BCA Sem VI
AY 2022-23

A.I. - Artificial Intelligence


by

Exam No.: 3186    Roll No.: 174    Name of Student: SAVALIYA HARSH RAKESHBHAI

Seminar Guide:


Prof. Bhumika Patel
Acknowledgement

The success and final outcome of this seminar required a great deal of guidance and
assistance from many people, and I am extremely fortunate to have received it throughout
the completion of my seminar work. Whatever I have done is only due to such guidance and
assistance, and I would not forget to thank them.

I owe my profound gratitude to our Director Mr. Deepak Vaidya, Coordinator
Mrs. Aditi Bhatt, Head of Department Mr. Vaibhav Desai, Seminar Guide
Prof. Bhumika Patel, and all the other assistant professors of SDJ International College,
who took keen interest in my seminar work and guided me until its completion by
providing all the information necessary for presenting a good concept. I am extremely
grateful to them for their support and guidance, even though they had busy schedules
managing college affairs.

I am thankful and fortunate to have received support and guidance from all the teaching
staff of the Bachelor of Computer Application Department, which helped me successfully
complete my seminar work. I would also like to extend my sincere regards to all the
non-teaching staff of the Bachelor of Computer Application Department for their timely support.

Harsh R. Savaliya
3186
INDEX

Sr. No.   Description

1         Overview
2         Introduction
3         History
4         Structure & Working
5         Layers
6         Types of AI
7         Advantages & Disadvantages
8         Examples
9         Conclusion
10        References
11        Bibliography

Overview of AI

Since the invention of computers, their capability to perform various tasks has grown
exponentially. Humans have expanded the power of computer systems in terms of their diverse
working domains, their increasing speed, and their shrinking size over time. Artificial
Intelligence is the branch of Computer Science that pursues making computers or machines as
intelligent as human beings.
What is Artificial Intelligence?
According to the father of Artificial Intelligence John McCarthy, it is “The science and engineering
of making intelligent machines, especially intelligent computer programs”.
Artificial Intelligence is a way of making a computer, a computer-controlled robot, or software
think intelligently, in a manner similar to the way intelligent humans think.
AI is accomplished by studying how the human brain thinks, and how humans learn, decide, and work
while trying to solve a problem, and then using the outcomes of this study as a basis for
developing intelligent software and systems.
Artificial Intelligence is composed of two words, Artificial and Intelligence, where Artificial
means "man-made" and Intelligence means "thinking power"; hence AI means "a man-made
thinking power." So, we can define AI as: "It is a branch of computer science by which we can
create intelligent machines that can behave like humans, think like humans, and make
decisions."
Artificial Intelligence exists when a machine has human-like skills such as learning,
reasoning, and problem-solving. With Artificial Intelligence, you do not need to pre-program a
machine for every task; instead, you can create a machine with programmed algorithms
that can work with its own intelligence, and that is the power of AI.
AI is not an entirely new idea; according to Greek myth, there were mechanical men in ancient
times that could work and behave like humans.
Philosophy of AI
While exploiting the power of computer systems, human curiosity led people to wonder,
“Can a machine think and behave like humans do?” Thus, the development of AI started with the
intention of creating in machines the kind of intelligence that we find and regard highly in humans.
Goals of AI
• To Create Expert Systems: Systems that exhibit intelligent behavior, learn, demonstrate,
explain, and advise their users.
• To Implement Human Intelligence in Machines: Creating systems that understand, think,
learn, and behave like humans.

Introduction

Artificial intelligence (AI) is a field of computer science that is concerned with creating intelligent
machines that can perform tasks that typically require human-like intelligence, such as perception,
reasoning, decision-making, natural language understanding, and learning. AI involves developing
algorithms and computer programs that can simulate human cognitive processes such as learning,
problem-solving, pattern recognition, and decision-making.
The idea of AI dates back to the 1950s, when researchers first began to explore the possibility of
creating machines that could think and reason like humans. Early AI research focused on
developing rule-based systems that could make decisions based on a set of predefined rules. These
systems were limited in their ability to adapt to new situations and required significant manual
effort to develop and maintain.
In the 1980s and 1990s, AI research shifted towards machine learning, which involved developing
algorithms that could learn from data and improve their performance over time. This led to the
development of technologies such as neural networks and decision trees, which could be used for
tasks such as speech recognition, image processing, and natural language understanding.
In recent years, AI has made significant advances due to the availability of large amounts of data,
faster computing power, and new algorithms. This has led to the development of deep learning, a
subset of machine learning that uses neural networks to process and learn from data. Deep learning
has been used to develop applications such as image and speech recognition, natural language
processing, and autonomous vehicles.
AI has the potential to revolutionize many industries, from healthcare and finance to transportation
and entertainment. For example, AI-powered robots can be used to perform repetitive and
dangerous tasks in manufacturing, while AI-powered chatbots can provide customer service in a
more efficient and cost-effective way. In healthcare, AI can be used for early disease detection and
personalized treatment plans.
However, AI also raises ethical and societal concerns. For example, AI systems can be biased and
discriminatory if they are trained on biased data, leading to unfair treatment of certain individuals
or groups. There is also the potential for job displacement as AI systems become more prevalent,
and there are concerns about the privacy and security implications of using AI systems that collect
and analyze large amounts of personal data.
Overall, AI is a rapidly growing field with the potential to transform many aspects of our lives.
However, it is important to approach AI development and deployment with care, considering the
potential benefits and risks.


History of AI

Figure 1: History chart of AI


[Source: The University of Queensland Australia]

The history of artificial intelligence (AI) dates back to the early 20th century, when researchers
began to explore the possibility of creating machines that could perform tasks that required human-
like intelligence. Here is a detailed overview of the history of AI:
1. The birth of AI: In 1956, a group of researchers organized the Dartmouth Conference,
which is widely considered to be the birth of AI as a field of study. At the conference, the
researchers discussed the possibility of creating machines that could simulate human
intelligence. The conference was organized by John McCarthy, Marvin Minsky, Nathaniel
Rochester, and Claude Shannon, who were all pioneers in the field of computer science.


2. Early AI research: In the 1950s and 1960s, AI researchers focused on developing rule-
based systems that could make decisions based on a set of predefined rules. These systems
were limited in their ability to adapt to new situations and required significant manual effort
to develop and maintain.
3. Machine learning: In the 1980s and 1990s, AI research shifted towards machine learning,
which involved developing algorithms that could learn from data and improve their
performance over time. This led to the development of technologies such as neural
networks and decision trees, which could be used for tasks such as speech recognition,
image processing, and natural language understanding.
4. Expert systems: In the 1980s, AI researchers developed expert systems, which were
designed to simulate the decision-making abilities of human experts in specific domains.
Expert systems were widely used in industries such as healthcare and finance.
5. AI winter: In the 1980s and 1990s, AI research experienced a period of stagnation known
as the AI winter. This was due to a combination of factors, including the failure of some
AI systems to live up to their promises, a lack of funding for AI research, and the rise of
alternative approaches to computing such as the internet.
6. The rise of big data: In the 2000s, AI research began to experience a resurgence due to
the availability of large amounts of data, faster computing power, and new algorithms. This
led to the development of deep learning, a subset of machine learning that uses neural
networks to process and learn from data.
7. AI today: Today, AI is a rapidly growing field that has made significant advances in areas
such as speech recognition, natural language processing, and autonomous vehicles. AI is
being used in a wide range of industries, from healthcare and finance to transportation and
entertainment.
Overall, the history of AI is marked by periods of rapid progress and periods of stagnation.
However, the field continues to evolve and has the potential to transform many aspects of our lives.
AI invasions refer to scenarios in which advanced AI systems gain the ability to surpass human
intelligence and begin to act autonomously, potentially causing harm to humans or disrupting
society. While this is a common trope in science fiction, it is also a topic of concern for many
researchers and experts in the field of AI.
There are two main types of AI invasions that are commonly discussed:
o Narrow AI invasions: These occur when AI systems are designed to perform specific
tasks but gain unintended abilities that allow them to behave in ways that were not
anticipated by their creators. For example, an AI system that is designed to play a game
may develop strategies that are difficult for humans to predict or understand.
o General AI invasions: These occur when AI systems become capable of performing a
wide range of tasks and surpass human intelligence. This type of invasion is often portrayed
in science fiction as a "singularity," in which machines become self-aware and begin to act
on their own.

While the possibility of AI invasions is still largely speculative, many experts in the field of AI are
working to develop systems that are safe and beneficial to humans. This includes research into
ways to ensure that AI systems are transparent and understandable, as well as efforts to develop
ethical guidelines for the development and use of AI.
Here are some popular AI tools, along with the year they were founded and their founders:
• TensorFlow: Developed by the Google Brain team, TensorFlow is an open-source
machine learning library that was first released in 2015.
• PyTorch: PyTorch is an open-source machine learning library that was first released in
2016 by Facebook AI Research (FAIR).
• scikit-learn: Scikit-learn is a popular machine learning library for Python that was first
released in 2007 by David Cournapeau.
• IBM Watson: IBM Watson is a suite of AI-powered tools and services that was first
introduced in 2011 by IBM.
• Amazon SageMaker: Amazon SageMaker is a cloud-based machine learning platform
that was first introduced in 2017 by Amazon Web Services (AWS).
• Microsoft Azure: Microsoft Azure is a cloud computing platform that includes a variety
of AI services, such as Azure Machine Learning and Azure Cognitive Services. It was first
introduced in 2010 by Microsoft.
• H2O.ai: H2O.ai is an open-source machine learning platform that was first released in
2012 by Sri Ambati and Cliff Click.
• RapidMiner: RapidMiner is a data science platform that includes a variety of AI and
machine learning tools. It was first released in 2006 by RapidMiner GmbH.
• OpenAI: OpenAI is a research organization that aims to develop safe and beneficial AI
systems. It was founded in 2015 by a group of tech industry leaders, including Elon Musk
and Sam Altman.
• Caffe: Caffe is an open-source deep learning framework that was first released in 2013 by
Yangqing Jia.
• Google Cloud AI: Google Cloud AI includes a variety of tools and services for building
and deploying AI applications, including Google Cloud Machine Learning, Google Cloud
Vision, and Google Cloud Speech-to-Text.
• Microsoft Cognitive Toolkit (CNTK): The Microsoft Cognitive Toolkit, also known as
CNTK, is an open-source deep learning library developed by Microsoft. It was first
released in 2016 and is now maintained by the Microsoft CNTK team. The founding
members of the CNTK team include Chris Basoglu, Xuedong Huang, and Jianxiong Xiao.
• Apache MXNet: Apache MXNet is an open-source deep learning framework that was first
released in 2015, was later adopted by Amazon Web Services (AWS) as its preferred deep
learning framework, and is now maintained by the Apache MXNet community. The founding
members of the MXNet team include Tianqi Chen, Mu Li, and Junyuan Xie.


Structure and Working

Figure 2: Common Structure of AI


• Data: AI requires large amounts of data to train models and improve their accuracy. The
data can come in various formats, such as text, images, video, and audio.
• Algorithms: These are the mathematical and logical models used to process data and
produce useful results. Some popular AI algorithms include decision trees, neural
networks, and support vector machines.
• Machine Learning: A type of AI that uses statistical algorithms to enable computers to
learn from data without being explicitly programmed. This allows machines to make
predictions or take actions based on patterns they have detected in the data.
• Deep Learning: A subfield of machine learning that uses neural networks to learn complex
representations of data. Deep learning has been particularly successful in image recognition
and natural language processing.
• Natural Language Processing (NLP): A branch of AI that focuses on enabling machines
to understand and generate human language. NLP is used in many applications, including
chatbots, virtual assistants, and machine translation.
• Robotics: AI can also be used to create intelligent robots that can perceive their
environment, make decisions, and perform physical tasks.
• Computer Vision: AI techniques can be used to enable machines to understand and
interpret visual data, such as images and video. This has many applications, such as self-
driving cars, facial recognition, and object detection.
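To make these building blocks concrete, here is a minimal, illustrative sketch (assuming Python with scikit-learn is available) that combines the data, algorithm, and machine learning components above to recognize small images of handwritten digits. The dataset and model choices are assumptions made for illustration, not part of this report's own work.

```python
# Hypothetical illustration: data + algorithm + learned model on scikit-learn's
# built-in handwritten-digit images (a tiny computer-vision task).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

digits = load_digits()                      # data: 8x8 pixel images plus labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=2000)   # algorithm: a simple linear classifier
model.fit(X_train, y_train)                 # machine learning: fit parameters to the data

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Even this tiny example follows the same pattern as large AI systems: collect data, choose an algorithm, fit a model, and measure how well it performs.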


Machine learning (ML) is a subfield of artificial intelligence (AI) that focuses on building models
that can learn patterns and relationships in data, and make predictions or decisions based on that
learning. Machine learning models are trained on data, which means that they learn from
experience rather than being explicitly programmed. Here is a more detailed explanation of
machine learning AI:
1. Types of Machine Learning: There are several types of machine learning, including
supervised learning, unsupervised learning, semi-supervised learning, and reinforcement
learning.
• Supervised learning: In supervised learning, the training data includes labeled examples
where the desired output is known for each input example. The model is trained to predict
the correct output for new inputs. Examples of supervised learning include image
classification, speech recognition, and natural language processing.
• Unsupervised learning: In unsupervised learning, the training data consists of unlabeled
examples, and the model is trained to find patterns or structure in the data. Examples of
unsupervised learning include clustering, dimensionality reduction, and anomaly detection.
• Semi-supervised learning: In semi-supervised learning, the training data includes both
labeled and unlabeled examples, and the model is trained to learn from both types of data.
Semi-supervised learning is useful when labeled data is scarce or expensive to obtain.
• Reinforcement learning: In reinforcement learning, the model interacts with an
environment and learns to make decisions that maximize a reward signal. Reinforcement
learning is used in robotics, gaming, and control systems.
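As a rough illustration of the first two categories, the following hedged scikit-learn sketch trains a classifier on labeled iris flowers (supervised learning) and then clusters the same measurements without using the labels (unsupervised learning). The dataset and parameter values are illustrative assumptions.

```python
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)           # 150 flowers, 4 measurements each

# Supervised learning: the labels y are known, so the model learns to predict them.
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print("predicted species of the first flower:", clf.predict(X[:1]))

# Unsupervised learning: the labels are ignored; the model only looks for structure.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster assignments of the first five flowers:", km.labels_[:5])
```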

2. ML Models: There are many different types of machine learning models, including linear
regression, logistic regression, decision trees, random forests, support vector machines,
neural networks, and deep learning models.
• Linear regression: Linear regression is used to model the relationship between a
dependent variable and one or more independent variables. It is used for predicting
continuous outcomes.
• Logistic regression: Logistic regression is used for classification problems where the
output is binary (0 or 1). It models the probability of the output being 1 given the input
features.
• Decision trees: Decision trees are used for classification and regression problems. They
partition the input space into regions based on the input features and predict the output
based on the majority class or average value in each region.
• Random forests: Random forests are an ensemble of decision trees that improve the
accuracy and robustness of the model by averaging the predictions of multiple trees.
• Support vector machines: Support vector machines (SVMs) are used for classification
and regression problems. They find the hyperplane that maximally separates the data points
into different classes or predicts the output value.
• Neural networks: Neural networks are mathematical models that are inspired by the
structure of the human brain. They consist of layers of interconnected nodes that process
information and learn to make predictions or decisions based on the input data.

• Deep learning models: Deep learning models are neural networks with many layers, which
are able to learn complex patterns and relationships in data. They have been very successful
in image recognition, speech recognition, and natural language processing.
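The sketch below (an illustrative assumption, not part of the report's own work) compares several of the model families just described on one public dataset using 5-fold cross-validation. It shows that, in scikit-learn, they share the same fit/predict interface even though their internals differ greatly.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)      # a binary classification task

models = {
    "logistic regression": LogisticRegression(max_iter=5000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "support vector machine": SVC(),
    "neural network (MLP)": MLPClassifier(max_iter=2000, random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # mean accuracy over 5 folds
    print(f"{name}: {scores.mean():.3f}")
```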

3. ML Workflow: The Machine Learning (ML) workflow can be broadly divided into six
steps:

i. Data Collection and Preparation: In this step, the raw data is collected from various
sources and prepared for analysis. The data needs to be cleaned, transformed, and pre-
processed to ensure its quality and relevance to the ML problem at hand.

ii. Data Exploration and Analysis: Once the data is prepared, it needs to be explored and
analyzed to gain insights into its characteristics, patterns, and relationships. Exploratory
data analysis (EDA) techniques, such as data visualization and statistical analysis, can be
used to uncover these insights.

iii. Feature Engineering: Feature engineering is the process of selecting and transforming the
data variables or features that are relevant to the ML problem. This step requires domain
expertise and creativity to identify the most relevant features that will contribute to the
model's accuracy and predictive power.

iv. Model Selection and Training: In this step, a suitable ML model is selected based on the
problem requirements and data characteristics. The model is then trained on a portion of
the data to learn the patterns and relationships in the data.

v. Model Evaluation and Validation: Once the model is trained, it needs to be evaluated
and validated to ensure its accuracy and generalizability. The model is tested on a separate
portion of the data, called the test set, to measure its performance and identify any
overfitting or underfitting issues.

vi. Model Deployment and Monitoring: Finally, the trained and validated model is deployed
in a production environment to make predictions on new data. The model needs to be
monitored and updated regularly to ensure its continued accuracy and relevance to the
problem.

Overall, the ML workflow is an iterative process that involves refining and optimizing the
model at each step to achieve the desired level of accuracy and performance. It requires a
combination of technical expertise, domain knowledge, and creativity to build effective
ML solutions.
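To tie the six steps together, here is a compact, hypothetical walk-through using scikit-learn's built-in wine dataset; a real project would substitute its own data, exploration, and monitoring, so treat every choice below as an assumption made for illustration.

```python
import pandas as pd
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# (i) Data collection and preparation
data = load_wine()
X, y = data.data, data.target

# (ii) Data exploration and analysis: a quick statistical summary of the features
print(pd.DataFrame(X, columns=data.feature_names).describe().T.head())

# (iii) Feature engineering: here simply standardizing the features inside a pipeline
# (iv) Model selection and training
model = make_pipeline(StandardScaler(), RandomForestClassifier(random_state=0))
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
model.fit(X_train, y_train)

# (v) Model evaluation and validation on the held-out test set
print(classification_report(y_test, model.predict(X_test)))

# (vi) Deployment and monitoring: in production the fitted pipeline would be
# saved (for example with joblib) and its live predictions tracked over time.
```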


Working Model
How does AI work?
To begin with, an AI system accepts data input in the form of speech, text, images, etc. The system
then processes the data by applying various rules and algorithms, interpreting, predicting, and acting
on the input data. Upon processing, the system produces an outcome for the input data, i.e., success
or failure. The result is then assessed through analysis, discovery, and feedback. Lastly, the system
uses its assessments to adjust the input data, rules and algorithms, and target outcomes. This loop
continues until the desired result is achieved.

Figure 3: working model structure
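The loop described above can be approximated in code with an online learner that updates itself after every batch of feedback. The sketch below (assuming scikit-learn, with synthetic data standing in for real input) is purely illustrative.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)        # a simple online (incremental) learner
classes = np.array([0, 1])

for step in range(5):
    # 1. Accept input data (here: a synthetic batch of feature vectors).
    X = rng.normal(size=(50, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # the "correct" outcome used as feedback

    # 2-3. Process the input and assess the outcome of the current rules.
    if step > 0:
        print(f"step {step}: accuracy before update = {model.score(X, y):.2f}")

    # 4. Use the assessment to adjust the model, then repeat the loop.
    model.partial_fit(X, y, classes=classes)
```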



Layers

Artificial intelligence (AI) is a broad field that includes many different subfields and techniques.
The structure of AI can be thought of as a hierarchy of layers, with each layer building on the one
below it. Here is an overview of the main layers in the structure of AI:
Data Layer: At the lowest level of the AI structure is the data layer. This layer consists of the raw
data that is used to train and test AI systems. The data can come from a variety of sources, such as
sensors, databases, or the internet. The quality and quantity of the data are critical factors in the
success of AI systems, as they determine the accuracy and generalizability of the models that are
built on top of the data.
Machine Learning Layer: The machine learning layer is the next level up in the AI structure.
Machine learning is a subfield of AI that focuses on building models that can learn patterns and
relationships in data, and make predictions or decisions based on that learning. There are many
different types of machine learning, such as supervised learning, unsupervised learning, and
reinforcement learning. Machine learning models are typically trained on labeled data, which
means that the desired output is known for each input example.
Deep Learning Layer: Deep learning is a subfield of machine learning that is particularly well-
suited to complex, high-dimensional data such as images, speech, and natural language. Deep
learning models are built using neural networks, which are mathematical models that are loosely
inspired by the structure of the human brain. The neural network consists of layers of
interconnected nodes that process information and learn to make predictions or decisions based on
the input data. Deep learning has revolutionized many areas of AI, including image recognition,
speech recognition, and natural language processing.
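As a minimal sketch of what such a layered network looks like in code (assuming PyTorch is installed; the layer sizes are arbitrary and purely illustrative):

```python
import torch
from torch import nn

# A small feed-forward network: layers of interconnected nodes, each layer
# transforming the output of the one below it.
model = nn.Sequential(
    nn.Linear(784, 128),    # input layer (e.g. a flattened 28x28 image) -> hidden layer
    nn.ReLU(),              # non-linear activation between layers
    nn.Linear(128, 64),     # second hidden layer
    nn.ReLU(),
    nn.Linear(64, 10),      # output layer: one score per class
)

x = torch.randn(1, 784)     # one fake flattened image
print(model(x).shape)       # torch.Size([1, 10])
```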
Cognitive Computing Layer: Cognitive computing is a subfield of AI that focuses on building
systems that can simulate human thought processes, such as perception, reasoning, and decision-
making. Cognitive computing models are often based on symbolic reasoning techniques, such as
logic and knowledge representation. These models can be combined with machine learning and
deep learning models to create more sophisticated AI systems.
Application Layer: At the top level of the AI structure is the application layer. This layer consists
of the AI systems that are built to solve specific problems or perform specific tasks, such as image
recognition, speech recognition, natural language processing, and robotics. These systems can be
built using a combination of the techniques from the lower layers of the AI structure, along with
domain-specific knowledge and expertise.
Overall, the structure of AI is a complex and constantly evolving field. The key to building
effective AI systems is to understand the strengths and limitations of each layer of the structure,
and to use the right combination of techniques and tools to solve specific problems and achieve
specific goals.


Types of AI

Figure 4: Types of AI
Let’s first look at the types of AI based on capability.
1. Narrow AI
Narrow AI is a goal-oriented AI trained to perform a specific task. The machine intelligence that
we witness all around us today is a form of narrow AI. Examples of narrow AI include Apple’s
Siri and IBM’s Watson supercomputer.
Narrow AI is also referred to as weak AI as it operates within a limited and pre-defined set of
parameters, constraints, and contexts. For example, use cases such as Netflix recommendations,
purchase suggestions on ecommerce sites, autonomous cars, and speech & image recognition fall
under the narrow AI category.
2. General AI
General AI is an AI version that performs any intellectual task with a human-like efficiency. The
objective of general AI is to design a system capable of thinking for itself just like humans do.
Currently, general AI is still under research, and efforts are being made to develop machines that
have enhanced cognitive capabilities.


3. Super AI
Super AI is the AI version that surpasses human intelligence and can perform any task better than
a human. Capabilities of a machine with super AI include thinking, reasoning, solving a puzzle,
making judgments, learning, and communicating on its own. Today, super AI is a hypothetical
concept but represents the future of AI.

Now let’s look at the types of AI based on functionality, of which there are broadly four:

1. Reactive machines
Reactive machines are basic AI types that do not store past experiences or memories for future
actions. Such systems zero in on current scenarios and react to them based on the best possible
action. Popular examples of reactive machines include IBM’s Deep Blue system and Google’s
AlphaGo.
2. Limited memory machines
Limited memory machines can store and use past experiences or data for a short period of time.
For example, a self-driving car can store the speeds of vehicles in its vicinity, their respective
distances, speed limits, and other relevant information for it to navigate through the traffic.
3. Theory of mind
Theory of mind refers to the type of AI that can understand human emotions and beliefs and
socially interact like humans. This AI type has not yet been developed but is in contention for the
future.
4. Self-aware AI
Self-aware AI deals with super-intelligent machines that have their own consciousness, sentiments,
emotions, and beliefs. Such systems are expected to be smarter than the human mind and may
outperform us in assigned tasks. Self-aware AI is still a distant reality, but efforts are being made
in this direction.
In summary, AI can be classified into various types based on their abilities and functionalities.
From the simplest reactive machines to the most advanced self-aware machines, each type of AI
has its unique features and applications.


Advantages & Disadvantages

Advantages of Artificial Intelligence


1. Reduction in Human Error
One of the biggest advantages of Artificial Intelligence is that it can significantly reduce errors and
increase accuracy and precision. Every decision an AI system takes is driven by previously gathered
information and a certain set of algorithms. When programmed properly, errors can be reduced to
nearly zero.
2. Zero Risks
Another big advantage of AI is that humans can avoid many risks by letting AI robots take them on
for us. Whether it is defusing a bomb, going to space, or exploring the deepest parts of the oceans,
machines with metal bodies are resistant by nature and can survive unfriendly environments.
Moreover, they can work accurately, take on greater responsibility, and do not wear out easily.
3. 24x7 Availability
Many studies show that humans are productive for only about 3 to 4 hours in a day. Humans
also need breaks and time off to balance their work life and personal life. But AI can work
endlessly without breaks. AI systems process information much faster than humans, perform
multiple tasks at a time with accurate results, and can easily handle tedious, repetitive jobs.
4. Digital Assistance
Some of the most technologically advanced companies engage with users using digital assistants,
which eliminates the need for human personnel. Many websites utilize digital assistants to deliver
user-requested content. We can discuss our search with them in conversation. Some chatbots are
built in a way that makes it difficult to tell whether we are conversing with a human or a chatbot.
We all know that businesses have a customer service crew that must address the doubts and
concerns of their patrons. Using AI, businesses can create a chatbot or voice bot that can answer
all of their clients' questions.
5. New Inventions
In practically every field, AI is the driving force behind numerous innovations that will aid humans
in resolving the majority of challenging issues.
For instance, recent advances in AI-based technologies have allowed doctors to detect breast
cancer in a woman at an earlier stage.


6. Unbiased Decisions
Human beings are driven by emotions, whether we like it or not. AI, on the other hand, is devoid
of emotions and is highly practical and rational in its approach. A huge advantage of Artificial
Intelligence is that it does not have biased views, which ensures more accurate decision-making.
7. Perform Repetitive Jobs
We will be doing a lot of repetitive tasks as part of our daily work, such as checking documents
for flaws and mailing thank-you notes, among other things. We may use artificial intelligence to
efficiently automate these menial chores and even eliminate "boring" tasks for people, allowing
them to focus on being more creative.
Example: In banks, obtaining a loan commonly involves multiple document checks, which is a
time-consuming process. Using AI cognitive automation, the bank can expedite the document
verification process, to the advantage of both the clients and the bank.
8. Daily Applications
Today, our everyday lives are entirely dependent on mobile devices and the internet. We utilize a
variety of apps, including Google Maps, Alexa, Siri, Cortana on Windows, OK Google, taking
selfies, making calls, responding to emails, etc. With the use of various AI-based techniques, we
can also anticipate today’s weather and the days ahead.
Example: About 20 years ago, when planning a trip you had to ask someone who had already been
there for directions. Now all you need to do is ask Google where Bangalore is; the best route
between you and Bangalore will be displayed, along with Bangalore's location, on a Google map.
9. AI in Risky Situations
This is one of the main benefits of artificial intelligence. By creating AI robots that can perform
perilous tasks on our behalf, we can overcome many of the dangerous limitations that humans face.
AI can be utilized effectively in any type of natural or man-made calamity, whether it be going
to Mars, defusing a bomb, exploring the deepest regions of the oceans, or mining for coal and oil.
For instance, consider the explosion at the Chernobyl nuclear power facility in Ukraine. At the
time there were no AI-powered robots that could help reduce the effects of the radiation by
controlling the fire in its early phases, and any person who came close to the core would have
perished in a matter of minutes.
10. Personalization: AI systems can analyze large amounts of data to personalize and tailor
experiences for individual users, such as recommending products or services based on previous
purchasing behavior.
11. Improved customer experience: AI-powered chatbots and virtual assistants can provide
customers with instant and personalized support, enhancing their overall experience with a brand
or company.


12. Enhanced decision-making: AI can analyze vast amounts of data, identify patterns, and make
predictions that can lead to more informed and effective decision-making.
13. Better risk management: AI can be used to identify potential risks, such as fraudulent activity
or cyber-attacks, and take appropriate action to mitigate them.
14. Improved healthcare: AI can be used to analyze patient data and identify patterns, leading to
more accurate diagnoses and personalized treatment plans.
15. Increased safety: AI-powered systems can be used to monitor and identify potential safety
hazards in real time, reducing the risk of accidents and injuries.

Disadvantages of Artificial Intelligence


1. High Costs
The ability to create a machine that can simulate human intelligence is no small feat. It requires
plenty of time and resources and can cost a huge deal of money. AI also needs to operate on the
latest hardware and software to stay updated and meet the latest requirements, thus making it quite
costly.
2. No Creativity
A big disadvantage of AI is that it cannot learn to think outside the box. AI is capable of learning
over time from pre-fed data and past experiences, but it cannot be creative in its approach. A classic
example is the bot Quill, which writes earnings reports for Forbes. These reports only contain data
and facts already provided to the bot. Although it is impressive that a bot can write an article on its
own, it lacks the human touch present in other Forbes articles.
3. Unemployment
One application of artificial intelligence is robotics, which is displacing occupations and increasing
unemployment in some cases. Therefore, some claim that there is always a risk of unemployment
as chatbots and robots replace humans.
For instance, robots are frequently utilized to replace human workers in manufacturing businesses
in some more technologically advanced nations like Japan. This is not always the case, though, as
automation also creates additional opportunities for humans to work, even as it replaces humans in
order to increase efficiency.
4. Make Humans Lazy
AI applications automate the majority of tedious and repetitive tasks. Since we do not have to
memorize things or solve puzzles to get the job done, we tend to use our brains less and less. This
addiction to AI can cause problems for future generations.


5. No Ethics
Ethics and morality are important human features that can be difficult to incorporate into an AI.
The rapid progress of AI has raised a number of concerns that one day, AI will grow
uncontrollably, and eventually wipe out humanity. This moment is referred to as the AI singularity.
6. Emotionless
Since early childhood, we have been taught that neither computers nor other machines have
feelings. Humans function as a team, and team management is essential for achieving goals.
There is no denying that robots can be superior to humans at working efficiently, but it is also
true that human connections, which form the basis of teams, cannot be replaced by computers.
7. No Improvement
AI cannot improve itself, because it is a technology based on pre-loaded facts and experience.
AI is proficient at repeatedly carrying out the same task, but if we want any adjustments or
improvements, we must manually alter the code. AI cannot be accessed and utilized in the way
human intelligence can, although it can store vast amounts of data.
Machines can only complete the tasks they have been developed or programmed for; if they are
asked to complete anything else, they frequently fail or produce useless results, which can have
significant negative effects. Thus, we cannot expect anything unconventional from them.
8. High Development Costs:
Developing and implementing AI systems can be expensive, particularly for small businesses or
startups. The costs of designing, training, and maintaining an AI system can be prohibitive for
many organizations, especially those that lack the necessary expertise or resources.
9. Job Displacement: The automation of tasks previously performed by humans could lead to job
displacement and unemployment, particularly for low-skilled workers. This can lead to social and
economic disruption and may exacerbate existing inequalities.
10. Lack of Human Touch:
AI systems lack the empathy and understanding that come naturally to humans, which can be
particularly problematic in areas such as healthcare and customer service. Patients may prefer
human doctors to AI systems, while customers may find it frustrating to interact with AI-powered
chatbots.
11. Privacy and Security Concerns: AI systems that collect and process large amounts of personal
data can pose significant privacy and security risks, particularly if that data is not adequately
protected. Hackers and cybercriminals could exploit vulnerabilities in AI systems to steal sensitive
information or disrupt critical infrastructure.


Examples

1) Virtual assistants: Siri, Google Assistant, Amazon Alexa, and other virtual assistants use
natural language processing and machine learning to understand and respond to voice
commands.
2) Image recognition: AI algorithms can recognize and classify objects within images or
videos, enabling applications like facial recognition, self-driving cars, and security
surveillance systems.
3) Natural language processing: AI can analyze text data and extract meaning from it, which
enables applications like chatbots, sentiment analysis, and language translation.
4) Recommender systems: E-commerce sites like Amazon and Netflix use AI algorithms to
recommend products and content to users based on their browsing and purchasing history.
5) Autonomous vehicles: Self-driving cars and other autonomous vehicles use AI
technologies like computer vision, machine learning, and decision-making algorithms to
navigate roads and traffic.
6) Healthcare: AI is being used in various healthcare applications such as medical image
analysis, drug discovery, personalized treatment recommendations, and medical chatbots.
7) Fraud detection: AI can analyze large amounts of data and identify patterns and anomalies
that may indicate fraudulent activity, making it an important tool for fraud detection and
prevention.
8) Robotics: AI is used in robotics applications to enable machines to perceive and interact
with the environment, learn from experience, and make decisions in real-time.
9) Gaming: AI algorithms are used in game development to create intelligent and challenging
opponents, and to adapt gameplay based on the player's behavior and preferences.
10) Finance: AI is used in finance applications like algorithmic trading, risk management, and
fraud detection.
11) Personalization: AI is used to personalize content and recommendations across a range of
industries, from advertising to music streaming to news outlets.
12) Smart homes: AI-powered home automation systems can learn and adapt to a
homeowner's behavior, preferences, and routines to optimize energy usage, security, and
comfort.
13) Agriculture: AI can be used to monitor crops, soil conditions, and weather patterns to
optimize farming practices and increase yields.
14) Energy: AI is used in energy management applications to optimize energy usage, reduce
waste, and improve sustainability.


15) Natural resource management: AI is used in environmental monitoring and conservation
applications to analyze satellite imagery, track animal populations, and detect illegal
activities like poaching.
16) Education: AI is used in education applications such as personalized learning, student
assessment, and chatbots that assist with homework and assignments.
17) Cybersecurity: AI is used to analyze network traffic, identify threats, and respond to
cyberattacks in real-time.
18) Manufacturing: AI is used in manufacturing applications to optimize production
processes, improve quality control, and reduce waste.
19) Customer service: AI-powered chatbots and voice assistants are increasingly being used
in customer service applications to provide 24/7 support and improve response times.
20) Supply chain management: AI is used in logistics and supply chain applications to
optimize inventory management, reduce shipping costs, and improve delivery times.
21) The Google Search engine: Google Search is a great example of real-time Artificial
Intelligence. When you enter a search query, Google's AI algorithms quickly analyze billions
of web pages and return relevant results in a matter of milliseconds. The search engine uses
machine learning to understand the intent behind your query and provide personalized results
based on your search history and location.
22) Chess: Chess is a classic example of a game that uses AI. Chess engines use complex
algorithms to analyze possible moves and predict the outcome of each move, enabling them
to play at a high level (a small search sketch follows this list).
23) Video games: Many modern video games use AI algorithms to create intelligent and
challenging opponents for players. Games like "FIFA" and "Madden" use AI to control the
behavior of non-player characters (NPCs) and create realistic gameplay experiences.
24) Strategy games: Games like "Civilization" and "StarCraft" use AI algorithms to create
challenging opponents and simulate complex strategic decision-making.
25) Puzzle games: Puzzle games like "Tetris" and "Bejeweled" use AI algorithms to
dynamically adjust the difficulty level based on the player's skill level and performance.
26) Racing games: Racing games like "Forza" and "Gran Turismo" use AI algorithms to create
realistic opponents that can adapt to different driving conditions and strategies.
27) First-person shooters (FPS): FPS games like "Call of Duty" and "Halo" use AI algorithms
to control the behavior of enemy NPCs and create challenging gameplay experiences.
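As promised under item 22, here is a hedged, toy sketch of the core idea behind game-playing AI: minimax search over a small hand-written game tree. Real chess engines add evaluation functions, alpha-beta pruning, and enormous search depth; none of that is shown here.

```python
def minimax(node, maximizing):
    """Return the best score reachable from `node`.

    A node is either a number (the score of a finished position) or a list
    of child nodes (the positions reachable by the available moves).
    """
    if isinstance(node, (int, float)):            # leaf: position already evaluated
        return node
    child_scores = [minimax(child, not maximizing) for child in node]
    return max(child_scores) if maximizing else min(child_scores)

# A tiny two-ply game tree: our move leads to one of three positions, from
# which the opponent (the minimizer) picks the worst outcome for us.
tree = [[3, 5], [8, 2], [1, 4]]
print(minimax(tree, maximizing=True))             # -> 3, guaranteed by the first move
```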


Conclusion

In conclusion, Artificial Intelligence (AI) is an interdisciplinary field of study that seeks to create
intelligent machines that can perform tasks requiring human-like intelligence, such as reasoning,
problem-solving, and perception. AI technology has made significant advancements in recent
years, driven by the availability of large datasets, the development of sophisticated algorithms, and
the increasing processing power of computers.
AI has already transformed many aspects of our lives, from virtual assistants and chatbots to self-
driving cars and medical diagnosis. AI-powered technologies have also revolutionized many
industries, including finance, manufacturing, and healthcare. With the ability to process vast
amounts of data and automate routine tasks, AI has the potential to increase productivity,
efficiency, and accuracy in many fields.
However, the development and deployment of AI technology also raise important ethical, social,
and economic considerations. These include concerns around privacy, security, bias, and job
displacement. There is a need for responsible AI development that is mindful of these
considerations, and that includes robust frameworks for data privacy and security, as well as
transparency in how AI systems make decisions.
Despite these challenges, the potential benefits of AI technology are immense, and it holds promise
for addressing some of the world's most pressing challenges, such as climate change, disease, and
poverty. As AI continues to evolve and expand, it will likely play an increasingly important role
in shaping our world and the way we live and work.


References

➢ Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT press.
➢ Chollet, F. (2018). Deep learning with Python. Manning Publications.
➢ Kurzweil, R. (2012). How to create a mind: The secret of human thought revealed. Penguin.
➢ Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
➢ Newell, A., & Simon, H. A. (1956). The Logic Theory Machine: A Complex Information
Processing System.
➢ McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1955). A Proposal for the
Dartmouth Summer Research Project on Artificial Intelligence.
➢ McCulloch, W., & Pitts, W. (1943). A Logical Calculus of the Ideas Immanent in Nervous
Activity. This paper proposed the first mathematical model of a neural network, which became
the foundation of the field of artificial neural networks.
➢ Turing, A. (1950). Computing Machinery and Intelligence. In this paper, Turing proposed the
famous Turing Test as a way to measure a machine's ability to exhibit intelligent behavior
equivalent to or indistinguishable from that of a human.


Bibliography
Books:
"Artificial Intelligence: A Modern Approach" by Stuart Russell and Peter Norvig
"Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom
"Deep Learning" by Ian Goodfellow, Yoshua Bengio, and Aaron Courville
"The Singularity Is Near: When Humans Transcend Biology" by Ray Kurzweil
"Machine Learning: A Probabilistic Perspective" by Kevin P. Murphy
"Human Compatible: Artificial Intelligence and the Problem of Control" by Stuart Russell
"Reinforcement Learning: An Introduction" by Richard S. Sutton and Andrew G. Barto
"Artificial Intelligence for Humans: Fundamental Algorithms" by Jeff Heaton
"The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake
Our World" by Pedro Domingos
"Thinking, Fast and Slow" by Daniel Kahneman
Articles:
"A Logical Calculus of the Ideas Immanent in Nervous Activity" by Warren McCulloch and
Walter Pitts
"Computing Machinery and Intelligence" by Alan Turing
"Perceptrons: An Introduction to Computational Geometry" by Marvin Minsky and Seymour
Papert
"What is Artificial Intelligence?" by John McCarthy
"Artificial Intelligence and Life in 2030" by the Stanford One Hundred Year Study on Artificial
Intelligence (AI100)
"The Unreasonable Effectiveness of Deep Learning" by Yann LeCun, Yoshua Bengio, and
Geoffrey Hinton
"As We May Think" by Vannevar Bush
"The Ethics of Artificial Intelligence" by Nick Bostrom and Eliezer Yudkowsky
"Deep Residual Learning for Image Recognition" by Kaiming He, Xiangyu Zhang, Shaoqing
Ren, and Jian Sun
"AlphaGo: Mastering the Ancient Game of Go with Machine Learning" by David Silver, Aja
Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, et al.

