Week 1 Transcript
There has been a lot of buzz about AI. There has also been a lot of skepticism about AI, as this cartoon essentially suggests. There are two pictures on this slide, one on your left and one on your right. In the one on the left, the machine essentially says, "I wonder whose job I'll actually take first." On the right, you see the character and the machine interacting with each other. The character says, "You could never do my job," whereas the machine replies, "I'm actually doing it right now." So, there is a lot of buzz about AI and a lot of skepticism. This skepticism is essentially around whether AI will wipe out a bunch of jobs, like any other automation technology. To put this debate in perspective, one needs to understand why AI is probably different from the other technological breakthroughs that we have seen in the past. So, the purpose of what I'm going to talk through right now is to give you a non-technical introduction to AI, essentially to enable you to think through how AI might actually be very, very different from other forms of technological breakthroughs that we have seen in the past.
So, the agenda of this module is to give you a historical perspective of AI and to explain why AI might be different from other breakthroughs that we might have seen in the past. The historical perspective is important simply to tell you why the current set of technologies that we classify as AI might be very, very different from what we might have seen in the past. Let me start by talking about what AI is. AI is a set of algorithms that can typically perform tasks that require human intelligence. Human beings are intelligent; we can make connections between multiple things, which is what we typically call cognition. With these sets of algorithms, we might be getting very close to human intelligence itself, to this idea of cognition, and maybe that is what differentiates AI from the whole bunch of other technologies or breakthroughs that we might have seen in the past. Broadly, AI consists of two sets of algorithms, called machine learning and deep learning. I will take you through what these mean gradually, as we make progress in this course.
The earliest AI applications were expert systems. Expert systems emulate the decision-making ability of a human expert, mainly using if-then rules.
So, the entire problem, for example the Towers of Hanoi, is encapsulated into a bunch of if-then rules, which are then fed into the machine, which then makes decisions about how to solve the Towers of Hanoi. The premise behind expert systems was that any problem that human beings solve using their intelligence can be encapsulated in if-then rules, which the machine can subsequently apply to solve those problems just like human beings would. You might want to think very hard about this premise, especially about whether all problems that human beings deal with using intelligence can really be encapsulated in the form of if-then statements.
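To make this if-then premise concrete, here is a minimal sketch in Python of how a problem like the Towers of Hanoi can be fully encapsulated in fixed rules. This is an illustration of the style, not a reconstruction of any historical expert system.

```python
# The entire Towers of Hanoi problem encapsulated as fixed if-then rules:
# every decision is hard-coded in advance, and nothing is learned from data.

def solve_hanoi(n, source, target, spare):
    """Return the full list of moves for n disks under one fixed rule set."""
    if n == 0:
        return []                                        # IF nothing to move, THEN stop.
    moves = solve_hanoi(n - 1, source, spare, target)    # THEN move n-1 disks out of the way,
    moves.append((source, target))                       # THEN move the largest disk,
    moves += solve_hanoi(n - 1, spare, target, source)   # THEN restack the n-1 disks on top.
    return moves

for frm, to in solve_hanoi(3, "A", "C", "B"):
    print(f"move disk from {frm} to {to}")
```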
So, how is modern AI different from these expert systems? For that, I have to give you a little bit of background about when expert systems are likely to perform well versus not so well. Expert systems perform well when, of course, the phenomenon can be formalised using these if-then rules. For example, a lot of us would have heard about Deep Blue, the chess-playing machine that famously beat Garry Kasparov in 1997. What did it do? It processed about 200 million possible moves per second. These were a bunch of if-then rules, which were used to figure out the optimal move by looking about 20 moves ahead using some sort of tree search. For people who are technically minded, you probably know what tree search is. But the critical thing to remember over here is that the entire game of chess was encapsulated in the form of if-then rules, and these if-then rules were applied to figure out the optimal move over 20 moves ahead using some kind of tree search algorithm. Now think about what we use AI for today. Typically, we use AI today for a whole bunch of tasks which probably cannot be encapsulated using these if-then rules. For example, what about facial recognition? Can facial recognition be encapsulated using if-then rules? What about voice matching? A lot of us use Alexa and Google and so on. Can these algorithms be programmed using if-then rules? Think about it for a minute. I'm going to ask Alexa to do something. Can the entire conversation that I might have, not just today but way ahead in the future, be encapsulated in the form of if-then rules? Probably not. So, this in some ways was the limitation of expert systems. The premise that any kind of problem requiring human intelligence can be encapsulated in the form of if-then rules was itself a difficult premise, and it essentially constrained the algorithms from doing a whole bunch of tasks that human beings try to do. Because all kinds of things that human beings do cannot be encapsulated in if-then rules, expert systems may not be true intelligence. So then, how is modern AI different from these expert systems that were invented a few decades back?
Many modern AI tools right now are based on this idea of pattern matching, this whole idea of attribution. You might want to think about what is so different about human intelligence. Human intelligence is different simply because human beings make a lot of attributions. We tend to attribute different things: if I have seen a face and I know a name, by attribution I can put the face to the name. That is one aspect of human intelligence. And how do we do it? That happens by pattern matching. Just like that part of human intelligence, many of the modern AI tools are also based on this idea of attribution or pattern matching. Some of the algorithms, like the neural networks we will speak about later, also replicate how neurons in the brain actually work. What are the neurons doing? There are parts of the brain, neurons, that essentially do this attribution for you. So, some of these algorithms essentially replicate this kind of intelligence that human beings are blessed with; modern AI comprises algorithms that perform these kinds of cognitive functions. So, modern AI is a little bit different from expert systems simply because the modern AI algorithms incorporate this kind of cognition, which is essentially based on pattern matching. And human beings don't stop with that. Human beings not only learn by a process of attribution, but they also tend to make predictions using attribution. If we see grey clouds, low clouds, we make this probabilistic prediction that it is likely to rain. How does that happen? We might have seen the same pattern in the past, and we use that experience to predict whether it is going to rain or not, conditional on seeing grey clouds and low clouds and so on. That is an aspect of prediction based on our prior experience, and you can call that learning as well. So, we learn from experience and we use that experience to make predictions when we see certain things. This whole idea of attribution can thus be extended to prediction through learning. The machines can do the same thing as well.
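Here is a toy sketch of that idea of prediction through attribution: tally how often rain followed a given sky pattern in past experience, then predict from those tallies. The observations are invented for illustration, and real algorithms are of course far more sophisticated.

```python
from collections import Counter

# Past "experience": pairs of (sky pattern, what followed). Invented data.
past_observations = [
    ("grey low clouds", "rain"), ("grey low clouds", "rain"),
    ("grey low clouds", "no rain"), ("clear sky", "no rain"),
    ("clear sky", "no rain"), ("clear sky", "rain"),
]

# Learning step: attribute outcomes to patterns by counting co-occurrences.
experience = {}
for sky, outcome in past_observations:
    experience.setdefault(sky, Counter())[outcome] += 1

def predict(sky):
    """Prediction step: the most frequent past outcome for this pattern."""
    return experience[sky].most_common(1)[0][0]

print(predict("grey low clouds"))   # -> "rain", learned purely from experience
```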
This combination of attribution and of prediction through learning constitutes two key differences that make these algorithms very different from expert systems. More importantly, from the perspective of this conversation, it also makes them very close to human intelligence. So, AI is very different from any of the other technological breakthroughs that we have seen in the past. The big aha about AI, hence, is this idea that we have a set of technologies that can learn much like human beings do and can perform a lot of the cognitive functions, not all of them, but a lot of the cognitive functions that human beings perform. Here is a sense of how a modern AI algorithm works, this idea of attribution. Let me talk about this idea of attribution in detail. On the left-hand side of your screen, you can see a particular kind of problem-solving question; some of us might like it, some others might actually hate it. This is the idea of using attribution: if you can read these patterns, you can predict what is likely to come next. This essentially goes back to the rain example that we spoke about; even in the rain example, we tend to make predictions using attribution, or learning through experience. And where does this experience come from? It comes from past data. The boxes that you see in the picture are what is used to learn these patterns, and once the human brain has learned these patterns, you can start to make predictions. This is similar to how modern AI algorithms learn and predict as well. Modern AI algorithms, hence, are based on learning through this idea of attribution or pattern matching itself. To summarise, the big difference between expert systems and modern AI is the fact that expert systems were not truly intelligent, because they were based on the fundamental premise that all kinds of problems can be encapsulated in the form of if-then logic. Human beings are far more intelligent than that. We deal with a bunch of structured problems that can be encapsulated in the form of if-then logic, but we also deal with a lot of unstructured problems which cannot be encapsulated using this logic.
Modern AI systems get very close to human cognition because they are based on this idea of attribution or pattern matching, which subsequently enables machines to make predictions through the process of learning. The learning comes from experience, this idea of pattern matching which we just saw a few minutes back. And that is the big difference between expert systems and modern AI algorithms. More importantly, from our perspective, it also means that this is really the first time that there has been a breakthrough which gets very close to human cognition: a breakthrough in which the technology can itself learn, much like human beings do, which is unparalleled, a kind of breakthrough that we have never seen in the past.
Contrast this with how human beings play chess. A human being's ability to process these kinds of moves is finite, but we use this whole idea of cognition and intelligence. In 1996, Kasparov won based on strategy, an idea that is grounded in cognition, as opposed to brute-force search over moves. This is how human beings make decisions. So, in a nutshell, expert-systems-based artificial intelligence can only apply to a subset of problems, those which can be encapsulated in this if-then logic, which is what this Deep Blue example essentially exemplifies.
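For the technically minded, here is a minimal sketch of the kind of fixed-depth tree search Deep Blue relied on, written in Python against a hypothetical game interface (the methods legal_moves, apply, is_over and evaluate are stand-ins; a real chess engine adds alpha-beta pruning and a hand-tuned evaluation function).

```python
# Minimax: search `depth` moves ahead and score positions with a hand-coded
# evaluation function. All the "knowledge" lives in evaluate(), not in learning.

def minimax(game, state, depth, maximizing):
    """Best achievable score looking `depth` moves ahead."""
    if depth == 0 or game.is_over(state):
        return game.evaluate(state)          # fixed, human-written chess knowledge
    scores = [
        minimax(game, game.apply(state, move), depth - 1, not maximizing)
        for move in game.legal_moves(state)
    ]
    return max(scores) if maximizing else min(scores)

def best_move(game, state, depth):
    """Pick the legal move whose subtree scores best for the side to move."""
    return max(game.legal_moves(state),
               key=lambda m: minimax(game, game.apply(state, m), depth - 1, False))
```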
So, let's fast forward a few years to AlphaGo. Go is a game which is far more complicated than chess, and a few years later, a company called DeepMind built a program, AlphaGo, to challenge the reigning world champion in this game. What did it do? There were a couple of instances that pointed out that AlphaGo was essentially different from Deep Blue. How was it different? There was this Move 37, a move the machine made after learning how different players essentially played. Once it understood how different players played, it came up with a very rare move, known as Move 37, which was very different from anything that anybody in the world had actually played. And how did it determine that this was the right move? It was by this idea of strategising, using intuition, much like what human beings actually do. It essentially said, okay, here is a move that is very different. And how did it know whether it should make that move?
It was based on this idea of attribution: if it made this move, maybe there would be certain results, and so on. And this is very different from how Deep Blue actually worked, simply because Deep Blue just did a search using that bunch of if-then encapsulations. Over here, Move 37 came about only because the machine was able to match patterns; the machine was thinking very, very much like a human being. In fact, when this Move 37 was made by the computer, many of the experts, and even the opponent who was playing against the machine, did not really understand what this move was all about. They thought it was actually bizarre. Sure, there were lots of similarities between Deep Blue and AlphaGo, in the sense that even AlphaGo used serious hardware: about 1,920 CPUs and about 280 GPUs, and so on.
But the key idea behind Move 37 was that the machine actually learned from the moves that its opponent was making, as well as from its own experience, and it came up with a very rare move that nobody in the history of the game had made before. That was not what Deep Blue was doing. And how did DeepMind, the company that made the algorithm, essentially develop it? This would give you some insight into why this algorithm might be different from Deep Blue. The algorithm was developed based on this idea of learning. Initially, at day zero, the system started off with no prior information about the game at all. All it knew was the rules that govern the game. What it then did is that the system essentially played against itself. And of course, the underlying algorithm that was used, which we'll talk about later, is this idea of a neural network.
So, as the machine was combining these neural networks into one large neural network, it simply had a lot more patterns to analyse and learn from, much like what human beings try to do: we acquire experience simply by playing against many opponents. And that's exactly what the machine was doing, except that in this case, it was playing against itself. Eventually, in about 21 days, the machine became as good as most players of Go. And subsequently, by the 40th day, the machine was able to be better than even the reigning world champion. So, this is the process by which AlphaGo was developed, and it is very, very different from how Deep Blue worked. Deep Blue was essentially working using a search algorithm, combinatorial optimisation, which is just an exercise in searching. But over here, there is attribution; there is learning from experience.
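To give a flavour of the self-play idea, here is a deliberately tiny, self-contained Python sketch. The "game" is trivial (whichever sampled move draws the higher score wins), but the mechanism mirrors the loop described above: start with no preferences, play against yourself, and reinforce whatever won. This is an illustration of the concept, not DeepMind's actual training algorithm.

```python
import random

random.seed(0)
payoff = {"a": 0.3, "b": 0.7}    # hidden quality of each move (unknown to the player)
policy = {"a": 1.0, "b": 1.0}    # unnormalised move preferences; starts with no knowledge

def sample(policy):
    """Draw a move with probability proportional to its current preference."""
    r = random.uniform(0, sum(policy.values()))
    for move, weight in policy.items():
        r -= weight
        if r <= 0:
            return move
    return move

for _ in range(5000):                         # self-play: the policy is both players
    m1, m2 = sample(policy), sample(policy)
    s1 = random.random() * payoff[m1]         # stochastic outcome of each move
    s2 = random.random() * payoff[m2]
    policy[m1 if s1 >= s2 else m2] += 0.01    # reinforce the move that won

print(policy)   # the preference for the genuinely better move "b" has grown
```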
And it is essentially this experience that enabled the machine to come up with this magical move, Move 37, that showcased to the entire world that here are machines that can be as intelligent as human beings, and that human beings may not be the only ones to have the power of cognition; machines can come very close to it. Hence, the current set of technologies is getting much closer to human cognition, which was never the case before. What we are staring at right now when we are talking about AI, and this is where the excitement really is, is a bunch of technologies that can get us very close to human cognition and hence solve a lot more problems, much like human beings actually do.
So, what is machine learning? The whole idea is to detect patterns. And where do these patterns come from? Just as we have experience, these patterns are learned by the algorithm from large data. And why do we need to learn patterns? Well, the objective is to make predictions. Go back to the rainfall example that we spoke about. The whole idea is that, because we know what look and feel of the sky would lead to rainfall, we can simply make predictions about whether it is going to rain or not. Similarly, the machine also learns patterns. How does it learn patterns? From large data. Where we have experience, the machine has large data; that's the relationship between how humans think and how machines think. Machines discern these patterns from large data and eventually make predictions. Hence, we call this learning from prior data and making inferences from prior data. The idea of making predictions is to draw inferences; the idea of deducing patterns is learning from prior data.
These algorithms can predict an output, which is the idea of supervised learning, or simply classify the data into different groups, or homogeneous patterns in the data, which is what unsupervised learning is. So, if you want to predict an outcome, like rainfall, that would be an example of supervised learning. If you simply had large amounts of data and you wanted to classify it into different clusters or groups, that would be an example of unsupervised learning. These algorithms can also adapt when you get new data, much like human beings do. If we have new data, we update our experience and we make stronger, more accurate predictions. Machines can do that too: whenever there is new data, the machine can learn patterns from this new data and make, maybe, better predictions. Machine learning mainly uses structured data, for example sales data, production data, human resource data and so on. Here is an example of what structured data means.
So, here is an example from a hospital. The prediction problem over here, which is highlighted in bold, is this idea of trying to predict whether a patient that comes to the hospital is likely to face an emergency C-section or not. A C-section, also known as a caesarean, is the idea of operating to get the baby out; an emergency C-section is one that becomes necessary during delivery. The algorithm in this hospital has a bunch of data with it, and remember, this is structured data and we are going to use supervised machine learning. It is structured data because the data has a structure: in terms of age, in terms of whether it was the first pregnancy for the patient, whether the patient is anaemic, whether the patient is diabetic, whether the previous birth was a premature birth, whether the ultrasound is normal, and a whole bunch of other factors as well. This is the structured data that has been collected for every patient that visited the hospital: there is patient one, there is patient two, and there are n number of patients. So, what is the algorithm going to do using supervised machine learning? Remember, there is an output over here. The output, or the prediction to make, is whether a given patient is likely to face an emergency C-section or not. And how is the algorithm going to predict that? Well, it is going to learn from all these patient records, the n patient records that we have at our disposal. So, this is an example not only of structured data but of supervised machine learning.
Unsupervised machine learning, on the contrary, would simply take this data and divide it into groups; there would not be an outcome that needs to be predicted, like the emergency C-section over here. So, in this example, we were given, let's say, about 10,000 patient records, each describing the pregnancy and the type of delivery that happened in the past. We are feeding the machine about 215 different features, or attributes, of the patients, and we are using this algorithm to predict whether a given patient is at high risk of an emergency C-section or not. So, this is an example both of structured data as well as of supervised machine learning.
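A minimal sketch of what this looks like in practice, using Python's scikit-learn. The records, feature columns and values below are invented stand-ins for the hospital's data (the real example had about 10,000 records and 215 features).

```python
from sklearn.ensemble import RandomForestClassifier

# Structured patient records: each row is one past patient.
# Columns: [age, first_pregnancy, anaemic, diabetic, previous_premature, abnormal_ultrasound]
X = [
    [34, 1, 0, 0, 1, 0],
    [28, 0, 1, 0, 0, 0],
    [41, 1, 1, 1, 1, 1],
    [25, 1, 0, 0, 0, 0],
    # ... in practice, thousands of records with ~215 features each
]
y = [1, 0, 1, 0]   # the labelled output: 1 = emergency C-section occurred

# Supervised learning: fit the model on past (features, outcome) pairs.
model = RandomForestClassifier(random_state=0).fit(X, y)

# Prediction for a new, unseen patient.
new_patient = [[30, 1, 0, 1, 0, 1]]
print(model.predict_proba(new_patient)[0][1])   # estimated risk of emergency C-section
```

An unsupervised variant would drop y entirely and instead, say, cluster the same rows into groups of similar patients.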
Deep learning, on the contrary, consists of algorithms that mimic the human brain for detecting objects, recognising speech, or even making decisions. This has been touted as a huge breakthrough simply because it uses all kinds of data that is unstructured: we have pictures, we have sound, we have speech, and so on. For the Alexa or Google aficionado, this is essentially what is at play: deep learning algorithms are what these engines use to discern human speech. How do they discern human speech? They use unstructured data. Speech is unstructured data, and so are pictures. And how are patterns discerned from something that is unstructured, such as speech or pictures? Underlying this is the idea of neural nets. These are simple processing nodes that are densely interconnected, and the idea of these interconnections is to figure out patterns from the data. The network can take vast amounts of data, structure it as multiple layers, and learn complex features from the data. Once these algorithms have represented the underlying data in neural networks, they can take vast amounts of input data, structure it, and discern patterns from it. This is an example of neural nets. The big difference between machine learning and deep learning is that machine learning essentially uses structured data; I gave you an example of what structured data is. Deep learning algorithms, in contrast, use data that is unstructured, like pictures, voice, videos, and so on. Voice, pictures and video are what are used as the basis for learning and discerning patterns through the process of learning.
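As a concrete illustration of "densely interconnected processing nodes arranged in layers", here is a minimal sketch using PyTorch. The layer sizes are arbitrary choices for illustration; real speech or vision models are vastly larger.

```python
import torch
import torch.nn as nn

# A tiny fully connected network: each Linear layer is a set of densely
# interconnected nodes, and stacking layers lets later ones learn more
# complex features from the simpler ones below.
net = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),   # first hidden layer: simple features
    nn.Linear(128, 64), nn.ReLU(),    # second hidden layer: combinations of features
    nn.Linear(64, 10),                # output: one score per class
)

x = torch.randn(1, 784)               # e.g., one flattened 28x28 image
print(net(x).shape)                   # -> torch.Size([1, 10])
```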
Here is an example of deep learning: the very famous autonomous vehicle, ALVINN, arguably a very specific instance of what we see emerging these days. Here is one particular algorithm; there are others as well. It was developed at Carnegie Mellon University, and the algorithm was called ALVINN, which stands for Autonomous Land Vehicle In a Neural Network. This is an algorithm that simply follows the road; it does only one aspect of autonomous driving, which is to follow the road. What does the computer do? The computer takes images from cameras, multiple images, and these are fed into neural networks with different hidden layers. The patterns of which way the road is turning are discerned from these neural networks. Once it discerns these patterns, it produces an output that tells the computer, or even the driver, the direction of the road: Which way is the road turning? Is the road straight? Is it turning towards the right or towards the left? Or is this a dead end? Those kinds of decisions are the output it is trying to generate. But the key difference between the hospital example that we saw and this one is that the input data is unstructured. What is the input data? Images taken by the camera embedded in the ALVINN vehicle. That is what distinguishes this algorithm as deep learning, and this is in some sense an example of using unstructured data to make predictions.
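Here is a rough sketch, in PyTorch, of ALVINN's task framed as a modern model: a camera frame in, a road direction out. The convolutional layers are a contemporary stand-in (the original ALVINN used a small fully connected network on low-resolution images), and the four output classes mirror the directions described above.

```python
import torch
import torch.nn as nn

directions = ["straight", "left", "right", "dead end"]

# Unstructured input (a camera image) in, one score per road direction out.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2), nn.ReLU(),   # hidden layers discern
    nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),  # road-shaped patterns
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, len(directions)),
)

frame = torch.randn(1, 3, 64, 64)     # a stand-in for one 64x64 RGB camera frame
print(directions[model(frame).argmax(dim=1).item()])
```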
Now let me contrast AI with another automation technology: Robotic Process Automation, or RPA. RPA tools use APIs and user interfaces to integrate systems and perform repetitive tasks. Sometimes they take the form of scripts that emulate human processes. But the key thing over here is this idea of repeatability: doing the same task the same way in order to ensure consistency and hence, maybe, higher quality.
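As a toy illustration of that repeatability, here is a minimal Python script in the RPA spirit: the same deterministic steps applied to every input, with no learning anywhere. The folder names and file layout are hypothetical.

```python
import csv
import pathlib

inbox = pathlib.Path("invoices/inbox")          # hypothetical incoming documents
processed = pathlib.Path("invoices/processed")
processed.mkdir(parents=True, exist_ok=True)

with open("invoice_log.csv", "a", newline="") as log:
    writer = csv.writer(log)
    for invoice in sorted(inbox.glob("*.pdf")):   # step 1: pick up each document
        writer.writerow([invoice.name])           # step 2: record it in the log
        invoice.rename(processed / invoice.name)  # step 3: file it away, identically every time
```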
And hence, you should start to think about what kinds of jobs such automation might supplement versus what kinds of tasks it might take away. Here is a pictorial representation of RPA: this is an automobile manufacturer, and there are a bunch of robots around the vehicle that is being assembled. If you stare at the picture, you will notice that these robots are not designed to be intelligent; they are not learning. Also note that there are no human beings around. This is a completely automated set-up in which there is absolutely no human intervention.
Maybe the human intervention comes a little later in the process of assembling these cars. The idea is to ensure consistency of quality: quality is consistent when these cars are made because these robots perform these repeated tasks the same way every time. If human beings were doing the same set of tasks, maybe they would not be as effective, simply because human beings get tired. You probably need to have them work in shifts because they might get tired, and this whole idea of starting and stopping means that the production process itself may not be consistent.
So, this idea of automation also gets you to think about the fact that maybe automation means that human beings are being replaced. But think about what is going to happen with AI in our context. Is there a possibility that AI is going to take a bunch of human jobs? Which kinds of jobs? On the contrary, can AI supplement human beings? Can you have mutually learning systems, human beings and machines, that actually help each other out? Those are some of the topics that I would like you to think through as we talk through this automation example.
So, AI is unlike other automation technologies because AI is intelligent. What do we mean by intelligence over here? We mean that it incorporates learning. Given that AI technologies can learn, they are essentially more intelligent than automation technologies like RPA. RPA simply uses structured inputs and logic, much like the expert systems of the past, but it is not even trying to be a general problem solver; RPA is for a specific task, like the assembly of cars, that requires lots of consistency.
On the contrary, AI can use even unstructured inputs. It can work with structured inputs, as we saw before, but it can also use unstructured inputs, and it sort of develops its own logic. Where does this logic come from? From this idea of pattern matching, this idea of attribution; that is where it is discerning these patterns from. It develops its own logic. This is not something that automation technologies typically do, which is why we might be on the cusp of seeing something that is very close to how human beings learn and react, given a stimulus. So, once again, I want you to think about AI in those terms and put AI in that perspective. This is the reason why I say that AI is a technological breakthrough, maybe unlike any other that we have seen in the past. That is something I want you to think about at this point.
Let me now give you a few examples of AI in action. Consider a large global bank that deployed AI to sift through suspicious transactions, a task that earlier engaged thousands of staffers. What the bank ended up doing is redeploying these human beings to focus on more complex and suspicious cases for in-depth analysis. So, here is a case wherein human beings who used to do these mundane tasks were deployed for more value-added tasks. This is one of the fallouts of the use of AI for this large global bank.
There are several examples in healthcare as well. For example, PathAI, a company, develops technology for pathologists, the doctors who diagnose diseases such as cancer, to make more accurate cancer diagnoses. At this point in time, the use of deep learning for cancer diagnosis has made diagnosis incredibly effective. Massive advances in deep learning and computer vision imply that the accuracy of cancer detection has increased by leaps and bounds, relative to when doctors used to do it manually. At this point in time, machine diagnosis is very close to, if not better than, human diagnosis itself. And this is just one example of the use of deep learning for analysing or diagnosing cancer.
Another example is the use of AI in marketing. If, based on a customer's previous buying patterns, needs could be identified even before the customer actually felt the need, companies could increase revenues. Companies that identified customer needs through predictive analytics have been able to increase revenues by about 21% year on year. Starbucks typically does this. Starbucks figured out that, using its loyalty card programme, which provides data on day-to-day customer purchases, it could use predictive analytics to predict what a customer might actually need. This resulted in an increase in revenues of about 21%, compared to an average of about 12% without predictive analytics, so the bump in revenues attributable to the use of AI was about 9 percentage points. And once again, this applies the technique of machine learning that we spoke about, because the data that comes from a loyalty card is structured data, and with machine learning algorithms, the purchase behaviour of a customer can be predicted based on the customer's past buying preferences.
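A toy sketch of that loyalty-card idea in Python: learn each customer's buying pattern from past purchases and predict a likely next need. The data is invented, and Starbucks' actual predictive-analytics stack is, of course, far more sophisticated.

```python
from collections import Counter

# Hypothetical loyalty-card history: structured records of past purchases.
purchase_history = {
    "cust_1": ["latte", "muffin", "latte", "latte", "cookie"],
    "cust_2": ["espresso", "espresso", "croissant", "espresso"],
}

def likely_next_need(customer):
    """Predict the customer's most frequent past purchase as their next need."""
    return Counter(purchase_history[customer]).most_common(1)[0][0]

for customer in purchase_history:
    print(customer, "->", likely_next_need(customer))
```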
Another example of the use of AI, or more precisely deep learning, is in automated driving. Tesla, for example, tests drivers against software simulations, based on deep learning and neural nets, that run in the background on the car's computer. The idea is to develop better autonomous driving systems. And how would we know if something is better or not? The idea is to benchmark it against real drivers: only if the background software is safer than the bunch of drivers it is benchmarked against would they embed it into the car's systems. In this case, the AI-powered autonomous driving system creates lots of opportunities for test drivers, because the need to continuously benchmark the autonomous driving software means that you have to keep benchmarking it against a bunch of test drivers. So here is a case wherein the use of AI increases employment rather than substituting for employment, as is popularly believed.
Finally, Bloomberg, which a lot of us would have heard of and which is in the financial sector, uses a research platform called Quid that applies deep learning techniques to classify papers by topic. Bloomberg does a lot of research, and the output typically takes the form of papers. The Quid research platform classifies these papers using AI and machine learning algorithms. What it does is form clusters of similar papers, and it also figures out the connections between them. It has the capability of doing the same for patents as well. In the absence of this kind of software, researchers would have to spend hours and hours sifting through a bunch of papers, analysing them, eventually classifying them into clusters, and then figuring out the connections between them. So, the platform not only did a much better job of classifying papers, but it also did it in a fraction of the time that human beings would take.
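A minimal sketch of this kind of paper clustering in Python, using TF-IDF text vectors and k-means from scikit-learn. The abstracts are invented, and a Quid-like production pipeline would use considerably richer models and add the connection-mapping step.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "deep learning for tumour detection in pathology slides",
    "convolutional networks improve cancer diagnosis accuracy",
    "patent similarity search using text embeddings",
    "measuring the novelty of patent claims with text mining",
]

# Represent each paper as a TF-IDF vector, then group similar vectors.
vectors = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)   # papers on similar topics land in the same cluster
```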
In fact, lots of pharmaceutical companies use this idea as well. Pharmaceutical companies require research in order to come up with new cures for diseases, and many of them use Quid-like tools for categorising the literature across a bunch of papers. This just makes it much simpler for researchers to look for literature and build on its findings. Lawyers often require jurisprudence: they have to look at previous cases and previous decisions in order to figure out how to frame a particular case. This kind of platform is also used by lawyers when they are searching for topics on a particular legal issue. Patent lawyers, for example, use patent similarity to decide whether the patent they are drafting is unique and different from the patents already out there, before they even decide to file a patent on behalf of a client.
Video 7: Growth of AI
So, why is there more of a buzz about AI now than before? One is the set of reasons we spoke about before: the evolution of the technology itself. But there are other reasons as well, and I will briefly take you through some of them. Here are a few reasons why there might be more of a buzz about AI now that are not related to how the technology itself has evolved. There are at least three, showcased by three different lines on this chart. One is processing power, the green line. The second is storage cost, the yellow line. And the third is the growth of data, the red line. If you stare at the yellow line, the cost of storage has decreased exponentially since about 2000.
In 2000, the cost of storing 1 GB of data was about $12.47, and it has decreased all the way to about $0.004 in 2017. That is an exponential decrease in the cost of storing data; organisations now have the ability to store large amounts of data for a fraction of the cost. The second reason is processing power. If you stare at the green line, you can see that processing power has increased by about 10,000 times since 2000, which is a huge increase. Finally, we now have access to lots of datasets, public as well as private, more so than before. This is simply because lots of data is being digitised, not only by private companies but by governments and other entities such as the World Bank and so on.
This means that there is a lot of data available, and the red line showcases the exponential growth in data from 2000 as well. The confluence of these three factors implies that AI is now not just a theoretical possibility: it is possible for private organisations to do AI at a large scale, which is why there is perhaps more of a buzz about AI now than before. It may not be just because of all the developments in technology; in fact, the developments on the technical front might actually be a consequence of these three factors. This means it is possible for even small private companies or smaller entities to predict lots of phenomena at a fraction of what it used to cost them before. Like I said, this confluence of three factors also means that there is lots of investment in AI.
If you look at the picture to your left, it suggests that investments in AI have increased exponentially as well between 2009 and 2019. I mean investment by VCs, investment by companies and so on in AI technologies, and these are in different areas. As the picture right in the centre of your screen suggests, these investments are being directed to different areas of AI. For example, autonomous driving seems to be the most popular area of investment; cancer and drug development, something we saw a few minutes back, has also attracted lots of investment. There have been significant investments in facial recognition, development of digital content, fraud prevention, property protection, and semiconductors as well. So, there are lots of areas in which there have been increases in private investment in AI, which are creating all these nifty AI applications.
Even on the research front, the fact that there has been all this growth in the utilisation of AI means that the amount of research has also increased, once again exponentially since 1998, as the picture towards the right side of your screen tells you. From 1998 all the way to about 2018, the number of papers published on AI has kept increasing. This suggests that, because there is so much use of AI, the research community is also working very hard to come up with newer ideas and newer algorithms in the AI domain. And as a result of all of this, there have been significant improvements in several areas that use AI. For example, in image processing, accuracy has increased dramatically from about 2010 to about 2019.
If you stare at the picture towards your left, identifying the subject of an image is now almost 100% accurate. That wasn't the case earlier; it was only about 70% accurate in about 2010. Similarly, accuracy in object segmentation, separating multiple items within an image, has also increased dramatically, all the way from the mid-20s to about the mid-40s in percentage terms. Similarly, there has been significant progress in language processing. Sentence parsing is almost 100% accurate these days using AI, and translation has seen significant progress as well: it used to be less than about 20% accurate in 2008, and it is all the way up to about 50% accuracy in about 2018.
So, as a result of all of this research, which is once again because of the confluence of the three factors that we spoke about, there have been significant advances in the different domains that use AI. To recap, there is more of a buzz now than before because the cost of prediction has decreased rapidly. As a result, organisations, even small ones, can actually afford sophisticated prediction tools now more than before. And knowing all of that, there has been a significant amount of private investment in AI, which has probably also fuelled the amount of research in AI as well as the advancements in the different domains that use AI.
Video 8: AI as a GPT
AI has been touted as a General Purpose Technology, GPT for short. Before I tell you why that is important, I have to tell you what a general purpose technology is all about and why AI may be one. This picture suggests that general purpose technologies typically increase the GDP of a country for the same amount of input. That is the reason why general purpose technologies are important for a nation's progress, or even for economic prosperity. But what are GPTs? What are general purpose technologies?
These are technologies characterised by pervasiveness: they are used across a broad array of sectors, and they are used as inputs in many downstream sectors. Suppose AI is a general purpose technology; its pervasiveness would simply imply that it is utilised across a broad array of sectors. A general purpose technology also has the inherent potential for downstream technological improvements, which means that if a particular technology is a general purpose technology, there are improvements not only in the particular domain that the technology relates to but, once again, across a broad array of sectors. And a general purpose technology also has the ability to spawn many complementary innovations.
The use of any general purpose technology, AI included, has to have the capability of spawning different kinds of innovations, not just in its own domain but other innovations as well. For example, computers, which have been touted as a general purpose technology, have created a whole bunch of newer business models and newer ways of engaging with the customer. They have also improved automobiles, and even flying or aviation, for that matter. That is an example of a general purpose technology, simply because of its ability to spawn complementary innovations across a broad array of sectors. And as GPTs improve, they spread throughout the economy, bringing about productivity gains, which is what is shown in the picture as well. General purpose technologies, by spurring innovation across sectors over time, can enhance productivity without altering the inputs, as once again shown in the picture.
The reason why output increases is not only the use of a GPT across a broad array of sectors, but also its ability to spawn complementary innovations across those sectors. This is why, for the same amount of inputs, a country, or even a firm for that matter, would be able to produce more output with the use of a general purpose technology. AI is a classic example of a general purpose technology, simply because of its potential for broad-based use across sectors, its ability to create complementary innovations across a broad array of sectors, and its inherent potential for technological improvements in the several sectors in which it is utilised. So, AI is now touted as a general purpose technology for these sets of reasons. In this picture, you can see the broad array of sectors that utilise AI. There is also a graphic showing that its use across this broad array of sectors has increased over time. That picture essentially shows you two things: AI's applicability is broad and has increased over time, and even within a given sector, the use of AI has also increased.
If you look at Neuro, for example, in the picture, which is represented by the yellow rectangle, the yellow rectangle has grown over the period. So, not only has the use of AI increased over time, but its use has also kept pace with, or grown relative to, the other sectors over the same period. This is a data point that suggests that AI has the potential to be the most important general purpose technology of our era.
What's notable is that AI can also automate a whole range of tasks. If you think about the different tasks that an organisation does, tasks can be cognitive or manual, and they can be routine or non-routine. Cognitive involves some kind of intelligence, some kind of thinking; routine is something that is repeatable. So, you have four combinations of tasks: cognitive and routine, cognitive and non-routine, manual and routine, and manual and non-routine. The point over here is that AI has the potential to influence each one of these task types. AI can help automate a bunch of manual tasks, but AI can also improve cognitive tasks. Applications of past automation were limited to areas where knowledge was codified, or at least codifiable.
We spoke about expert systems and the whole idea of being able to encapsulate a particular logic in a set of written rules; that would be an example of automation. So, if you are thinking about an automation technology such as the RPA that we spoke about, RPA only has the potential to automate some of the manual and routine tasks. AI, on the other hand, given its ability to also deal with cognitive functions, has the ability to automate, help with, or supplement the cognitive routine as well as the cognitive non-routine tasks. For example, deep learning can substitute for workers in a wide range of non-routine cognitive tasks, although right now that substitution might be incomplete or only partial. So, the bottom line is that AI has the potential to influence a broad array of sectors and a broad array of tasks within an organisation as well.
And because of this, businesses now feel that they have to embrace AI. Here are a few numbers that suggest how important AI might be for the sustainability of businesses. 84% of C-suite executives believe that they will not be able to achieve their growth objectives without scaling up AI. 76% of C-suite executives say that they struggle with how to scale up AI; they realise its importance, but nonetheless it seems to be a struggle. And 75% of C-suite executives believe that they risk going out of business if they don't scale up AI. So, these three numbers tell you both the importance of AI and the difficulty firms face in scaling it up. The other picture, the triangle, also shows you that the use of AI creates all kinds of benefits for firms. For example, it increases the market price-to-sales ratio. This is an indication of how valuable a company's sales are: a higher price-to-sales ratio means that a single dollar of sales translates into a larger amount of market value in stock prices. The bump in this ratio that companies get in the stock market due to the use of AI is about 28%. The price-to-earnings ratio, the ratio of the stock market price to the earnings, that is the profits, of a company, jumps up by about 33%. And the ratio of the value of the company, which is the total number of shares outstanding multiplied by the market value of the stock, that is the enterprise value, to its revenues jumps up by about 35% as well.
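To make these ratios concrete, here is a small worked example in Python; all of the numbers are hypothetical and chosen only to show how each ratio is computed. The value-to-revenue ratio mentioned above is computed analogously, using the company value in the numerator.

```python
# Hypothetical company figures, purely for illustration.
share_price = 50.0
shares_outstanding = 1_000_000
annual_sales = 20_000_000.0
annual_earnings = 2_500_000.0

market_value = share_price * shares_outstanding      # value of all outstanding shares
price_to_sales = market_value / annual_sales         # market value per dollar of sales
price_to_earnings = market_value / annual_earnings   # market value per dollar of profit

print(price_to_sales)      # 2.5
print(price_to_earnings)   # 20.0
```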
So, AI creates all these benefits for organisations, because of which C-suite executives understand its importance; yet it is a struggle for organisations to embrace AI. To recap, AI is a general purpose technology that is applicable to a broad array of sectors and businesses, and the risk of not adopting AI is very high, just as one of the numbers told you that the risk of going out of business increases if organisations do not adopt AI. The bottom line is that even if you don't adopt it, given that AI is a general purpose technology, your partners and competitors might actually end up adopting it. In the short run, the simple use of AI can be a source of competitive advantage: you have it, but maybe your competitors or partners don't. That might be a source of short-term competitive advantage. In the long run, however, the combination of AI and business integration will be crucial, not just for the survival of firms but also for long-term, sustainable competitive advantage. For all these reasons, companies need to get started early. An early start is important not only to understand the use of AI, but also to customise it for the respective business and refine its use.