Intro To AI

Artificial Intelligence (AI) is the capability of machines to learn from data and make predictions, enhancing human decision-making through various applications like language understanding and image recognition. The evolution of AI began with Turing's work in the 1950s, leading to advancements in narrow and broad AI, as well as cognitive computing. While AI offers benefits such as increased efficiency and innovation, it also poses challenges including job displacement, ethical concerns, and data privacy issues.

What is AI?

 Artificial intelligence refers to the ability of a machine to learn patterns from data and make
predictions, combining computer science and robust datasets to enable problem-solving
that augments human judgment.

 Everyday capabilities include understanding language (e.g., voice assistants), recognizing images, making predictions, playing games, and assisting in self-driving tasks.

 Not AI: traditional rule-based systems, simple automation, fixed-function hardware, purely
mechanical devices, non-interactive systems, and basic sensors that do not learn.

Evolution of AI

 1950: Alan Turing publishes his famous paper, "Computing Machinery and Intelligence," in which he proposes a thought experiment called the "imitation game" (the Turing test).
 1956: John McCarthy organizes the Dartmouth Conference and coins the term ‘Artificial Intelligence’.
 1960s-70s: development of expert systems, early neural networks, and problem-solving
techniques.
 21st century: Resurgence of interest and progress in AI with advancements in computing
power, data availability, and algorithmic innovation.

Types of AI

 Narrow AI: focused on single tasks (e.g., recommendations, voice assistants); highly effective at those tasks but with no understanding beyond what it was trained on.

 Broad AI: more versatile than narrow AI, handling a wider range of related tasks within
domain-specific contexts and business processes.

 General AI: hypothetical systems matching human intellectual breadth; today’s AI lacks
human-like abstraction, strategizing, and creativity, and Artificial Superintelligence remains
speculative.

Domains of AI

 Data Science: works with numerical, alphabetical, and alphanumeric data to collect, analyze,
and visualize patterns using statistics and ML; data types include structured, unstructured,
and semi-structured.

 Computer Vision: gives computers the ability to interpret images and make decisions based on what they see.

 Natural Language Processing (NLP): enables computers to understand, interpret, and generate human language in text and speech for tasks like translation, summarization, and speech recognition.

Data Types quick view

 Structured data: tabular, row/column format (e.g., names, dates); easy to analyze.

 Unstructured data: no fixed schema (e.g., text, images, comments); requires specialized
techniques.

 Semi-structured data: uses metadata to organize parts (e.g., social video with hashtags).
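
To make the distinction concrete, the following minimal Python sketch shows one made-up example of each type; the field names, hashtags, and values are hypothetical and chosen only for illustration.

import json

# Structured: fixed rows and columns, like a spreadsheet or SQL table.
structured = [
    {"name": "Alice", "signup_date": "2024-01-15", "age": 34},
    {"name": "Bob", "signup_date": "2024-02-03", "age": 29},
]

# Unstructured: free-form content with no fixed schema.
unstructured = "Loved the product, but the delivery took far too long to arrive."

# Semi-structured: free-form content plus metadata (hashtags) that organizes part of it.
semi_structured = {
    "video_id": "abc123",
    "hashtags": ["#travel", "#food"],  # metadata that can be searched and filtered
    "caption": "Street food tour, day two!",
}

print(structured[0]["name"])        # fields can be queried directly by name
print(len(unstructured.split()))    # free text needs extra processing first
print(json.dumps(semi_structured, indent=2))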

How do pixels help computers see?

 Each image consists of pixels that contain information on color and intensity. These pixels
are converted into a series of numbers, which are understood by the computer through
mathematical processing.
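
As a rough illustration (the pixel values below are arbitrary), a tiny grayscale image can be represented as a grid of numbers using NumPy, which is assumed as a dependency here:

import numpy as np

# A 3x3 grayscale "image": each pixel is an intensity from 0 (black) to 255 (white).
image = np.array(
    [[  0, 128, 255],
     [ 64, 192,  32],
     [255,   0, 128]],
    dtype=np.uint8,
)

print(image.shape)    # (3, 3): height x width
print(image / 255.0)  # rescaled to the 0-1 range that models typically work with

A color image simply adds a third dimension, usually of size 3 for the red, green, and blue channels.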

NLP, NLU, NLG

 NLP: the umbrella term for how computers interact with human language, covering what language tasks systems can perform.

 NLU: understanding meaning, extracting information, intent, and sentiment; the “comprehension” side.

 NLG: generating coherent language from structured data; the “production” side.
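
As a toy illustration of the NLU/NLG split (plain Python, no real NLP library; the keyword rules and phrasing below are invented purely for demonstration):

def understand(text: str) -> dict:
    # NLU side: pull a crude intent and sentiment out of raw text.
    lowered = text.lower()
    intent = "book_flight" if "flight" in lowered else "unknown"
    sentiment = "negative" if any(w in lowered for w in ("awful", "bad", "late")) else "neutral"
    return {"intent": intent, "sentiment": sentiment}


def generate(record: dict) -> str:
    # NLG side: turn structured data back into a readable sentence.
    return (f"Your {record['intent'].replace('_', ' ')} request was received "
            f"(detected sentiment: {record['sentiment']}).")


parsed = understand("I need a flight to Cebu; my last trip was awful.")
print(parsed)            # {'intent': 'book_flight', 'sentiment': 'negative'}
print(generate(parsed))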

Cognitive computing

 Cognitive computing mimics perception, learning, and reasoning to enhance decision-making, integrating ML, reasoning, NLP, and computer vision to support human cognition.

 It aims to interact naturally with humans and improve human decision-making rather than
replace it, with examples including IBM Watson, DeepMind, and Microsoft Cognitive
Services.

Key AI terminologies

 Machine Learning (ML): algorithms that learn from data to make predictions/decisions
without explicit programming of rules.

 Deep Learning (DL): multi-layer neural networks inspired by brain neurons that learn hierarchical representations, loosely mimicking how the human brain processes data and forms patterns.

 Neural networks: stacked node layers (input, hidden, output) with activation thresholds; networks with more than three layers (including input and output) are called deep neural networks. When a node's activation passes its threshold, it sends data on to the next layer.
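
A minimal sketch of how data flows through such stacked layers, using NumPy with random, untrained weights (the layer sizes are arbitrary, chosen only for illustration):

import numpy as np

rng = np.random.default_rng(0)

x = np.array([0.5, -1.2, 3.0])       # input layer: one example with 3 features

W1 = rng.normal(size=(3, 4))         # weights into a hidden layer of 4 nodes
b1 = np.zeros(4)
hidden = np.maximum(0, x @ W1 + b1)  # ReLU activation: a node passes data on only above its threshold of 0

W2 = rng.normal(size=(4, 2))         # weights into an output layer of 2 nodes
b2 = np.zeros(2)
output = hidden @ W2 + b2            # raw output scores

print(hidden)
print(output)

Training would adjust W1, b1, W2, and b2 so that the outputs match labeled examples; the sketch above only shows the forward pass.
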
Types of machine learning

 Supervised learning: learns from labeled input-output pairs to map inputs to outputs;
examples include linear/logistic regression, decision trees, SVMs, and neural networks. The
goal of supervised learning is to learn a mapping function from input variables to output
variables.

 Unsupervised learning: discovers hidden patterns in unlabeled data like clusters or associations; examples include k-means, hierarchical clustering, PCA, and autoencoders. The goal of unsupervised learning is to explore and discover inherent structures (both paradigms are sketched in code after this list).

 Reinforcement learning: agents learn by interacting with an environment to maximize cumulative reward using feedback signals (rewards/penalties); examples include Q-learning, DQN, policy gradients, and actor–critic. The goal of reinforcement learning is to learn a policy or strategy that guides the agent to take actions that lead to the highest cumulative reward over time.
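
A small sketch of the first two paradigms, assuming scikit-learn is available (the toy data points and labels are made up for illustration; reinforcement learning is omitted because it needs an interactive environment):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Supervised: labeled input-output pairs (features plus a class label provided by a human).
X = np.array([[1.0, 2.0], [2.0, 1.0], [8.0, 9.0], [9.0, 8.0]])
y = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(X, y)
print(clf.predict([[1.5, 1.5]]))     # -> [0]

# Unsupervised: the same features with no labels; the algorithm groups them on its own.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)                    # two clusters, e.g. [1 1 0 0]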

Illustrative example from the text

 Rule-based sorting of grocery labels uses fixed if–else logic and is not learning-based.

 ML improves sorting by learning from examples (e.g., size, shape, color) and reducing errors through iterative training and parameter tuning, as sketched after this list.

 DL removes manual feature design, learning implicit representations from images across
layers to classify items end-to-end.
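
A toy sketch of that contrast, assuming scikit-learn; the features, thresholds, and labels below are invented for illustration and are not from the original text:

from sklearn.tree import DecisionTreeClassifier

def rule_based_sort(weight_g: float, is_round: bool) -> str:
    # Fixed if-else logic: it never improves and breaks on items the rules did not anticipate.
    if is_round and weight_g < 250:
        return "apple"
    return "other"

# ML version: learn the sorting from labeled examples (weight, roundness, color) instead of
# hand-written thresholds; retraining on more examples reduces errors over time.
X = [[180, 1, 0], [200, 1, 0], [120, 0, 1], [140, 0, 1]]  # [weight_g, is_round, is_yellow]
y = ["apple", "apple", "banana", "banana"]
model = DecisionTreeClassifier(random_state=0).fit(X, y)

print(rule_based_sort(190, True))     # 'apple'
print(model.predict([[130, 0, 1]]))   # ['banana']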

Benefits of AI

 Efficiency and productivity: automation, faster analysis, and process optimization across
sectors.

 Better decision-making: pattern discovery in large datasets supports data-driven choices.

 Innovation: offloading repetitive tasks frees humans to focus on creativity and problem-
solving.

 Science and healthcare: accelerates drug discovery, diagnostics, and personalization.

Limitations of AI

 Job displacement: automation pressures necessitate reskilling and upskilling.

 Ethical concerns: bias, surveillance, manipulation, and the need for guardrails.

 Explainability: opaque models hinder understanding of how outputs are produced.

 Data privacy/security: large-scale data collection introduces vulnerabilities and trust challenges.
