
Ethical Dilemmas of AI Technologies

The document discusses three examples of artificial intelligence - IBM Watson, intelligent personal assistants, and self-driving cars - and evaluates some of the ethical dilemmas posed by each. For IBM Watson, when used in hospitals for cancer diagnosis, act-utilitarian ethics are satisfied by more efficient treatment of patients, but rule-utilitarian and rights ethics may be violated if patient data is used without consent. Both patients and doctors have ethical concerns if artificial intelligence use is not transparent or lacks patient approval.


I. Introduction
Artificial intelligence (AI) is a term used to describe the ability of a machine to perform tasks in a manner analogous to how humans do. Hence, it is a broad term that encompasses all technology that can assess its environment in some way and take actions that maximize its chances of success.
There are widespread fears in today's society that artificial intelligence will develop until it is smarter than humans. This could have dangerous consequences if we do not keep these technologies under control throughout their development, a concern voiced by the signatories of the open letter on artificial intelligence. However, no existing artificial intelligence has yet reached the reasoning and analytical abilities that humans are capable of. Artificial intelligence is currently at a stage where it can merely assist humans by helping them perform tasks faster.
For this report, three examples of artificial intelligence have been selected in order to highlight a few of the many ethical dilemmas AI currently poses: Watson by IBM, intelligent personal assistants, and self-driving cars.
In order to evaluate some future ethical dilemmas of artificial intelligence, roboethics, machine ethics, and unintended consequences will be discussed in terms of the three selected applications.
1. IBM Watson
IBM Watson is a computer system that understands questions posed in natural language. It has access to vast amounts of data (a huge database), which it uses to understand the question and come up with the answer that has the highest confidence of being correct.
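Watson's actual pipeline is proprietary and far more complex; the following is only a toy Python sketch of the general idea of scoring candidate answers against evidence and returning the most confident one. The knowledge base, scoring rule, and all names here are our own illustration.

```python
# Toy sketch of confidence-ranked question answering (hypothetical,
# not Watson's real method): score each candidate answer by how much
# evidence links it to the question, then return the best-scoring one.

def answer_question(question: str, knowledge_base: dict) -> str:
    """Return the candidate answer with the highest evidence score."""
    question_words = set(question.lower().replace("?", "").split())
    candidates = []
    for topic, answer in knowledge_base.items():
        # Naive evidence score: word overlap between question and topic.
        overlap = len(question_words & set(topic.lower().split()))
        candidates.append((overlap, answer))
    best_score, best_answer = max(candidates)
    return best_answer if best_score > 0 else "No confident answer found."

kb = {"capital of France": "Paris", "tallest mountain": "Mount Everest"}
print(answer_question("What is the capital of France?", kb))  # Paris
```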
IBM has developed Watson so that it can be deployed in many industries such as medicine, law, and business analytics. In medicine, Watson is currently being used in 14 hospitals in the US and Canada for quicker cancer diagnosis and to add more patient data to the system. Watson uses this patient data and a vast number of medical resources to reduce the number of tests needed to diagnose types of cancer. Watson works in conjunction with the hospitals in order to ensure that it does not make mistakes.
In business analytics, Watson is used to analyze a company's data in order to make forecasts for the future and recommend possible options. Watson can perform such analysis in various divisions such as operations, sales, finance, human resources, and marketing.
2. Intelligent Personal Assistant
Intelligent personal assistant is a general term used to describe software systems such as Siri, Google Now, and Cortana. These systems have been evolving rapidly. Similarly to Watson, they understand natural language. However, they are meant for personal, small-scale use, unlike Watson, and they do not search through large data banks as Watson does. Their main function is to take verbal commands from the user so that the user does not need to type or press anything. These commands include calling someone, setting an alarm, searching the internet, opening an app, setting a reminder, and checking the calendar. They are also capable of answering basic questions such as "How's the weather today?". However, it is clear that their level of intelligence is not the same as Watson's. So far, these intelligent personal assistants are available on phones and computers.
3. Self-Driving Cars
Many researchers and private companies have been investing a lot of time and money into developing autonomous cars for the past few decades. Although they are still under development, many are coming extremely close to being commercially available. Google, especially, has developed software called Google Chauffeur for driverless cars. Google is testing this software on existing cars such as the Toyota Prius, Audi TT, and Lexus RX450h. A few states in the United States have already approved the testing of driverless cars on public roads. Hence, it is clear that driverless cars are closer to the present than we think.
Google's driverless cars have a range finder mounted on top to assist the software in creating a detailed 3D map of the environment. The system then compares these generated maps with existing high-resolution maps of the world, giving it enough information to drive itself.
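As a rough illustration of this map-matching idea, the toy sketch below scores candidate positions by how many range-finder points line up with obstacles in a prior map. Real localization (e.g., particle filters over lidar scans) is far more sophisticated; the grid, poses, and scores here are entirely hypothetical.

```python
# Toy sketch of localization by map matching: try candidate poses and
# keep the one under which the sensed points best agree with the prior map.

def match_score(scan: set, prior_map: set, offset: tuple) -> int:
    """Count scan points that land on known obstacles when shifted by offset."""
    dx, dy = offset
    return sum((x + dx, y + dy) in prior_map for x, y in scan)

def localize(scan, prior_map, candidates):
    """Return the candidate offset (pose) that best explains the scan."""
    return max(candidates, key=lambda off: match_score(scan, prior_map, off))

prior_map = {(5, 5), (6, 5), (7, 5)}   # a known wall in the prior map
scan = {(0, 0), (1, 0), (2, 0)}        # the same wall as seen from the car
print(localize(scan, prior_map, [(5, 5), (0, 0), (3, 3)]))  # (5, 5)
```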
[Link]
[Link]
[Link]
[Link]
II. Present Ethical Dilemmas
1. IBM Watson
Medical diagnosis

Action considered: Watson is used in hospitals to help diagnose patients and to develop a database of patient data large enough for Watson to be adopted in all hospitals.

| Ethics | IBM | Doctor | Patient |
| --- | --- | --- | --- |
| Act-utilitarian | Satisfied | Satisfied | Satisfied |
| Rule-utilitarian | Violated if the patient does not permit IBM to store their medical data | Violated if the patient is not informed and does not permit IBM to store their medical data | Not a participant |
| Rights | Violated if the patient does not permit IBM to store their medical data | Violated if the doctor does not get the patient's consent to allow Watson to store their medical data | Violated if the patient is not informed and does not permit IBM to store his/her medical data |
| Duty | Satisfied | Satisfied | Satisfied |
| Virtue | Satisfied | Violated if the patient is not informed | Not a participant |
To evaluate a present ethical dilemma when using Watson for cancer diagnosis, the above ethics table is analyzed for the action of using Watson in selected hospitals for the cancer diagnosis of a patient, in order to train Watson for mass usage across all hospitals. Watson's job is to make diagnosis faster for the doctor.
The utility of using Watson for cancer diagnosis is to treat more patients by using resources more efficiently. This utility is the same regardless of the party in consideration. It is clear that act-utilitarian ethics are satisfied, since Watson, if trained properly with experience and patient data, will save more lives than doctors can save by themselves. However, it could be argued that if IBM does not get permission from patients to save their medical data, IBM is breaking privacy laws. This would mean that rule-utilitarian ethics are violated.
From IBM's perspective, not all ethics are satisfied. IBM has the right to develop Watson to provide a service and make the company profitable, but it does not have the right to use patient data without patient approval. Hence, rights ethics are violated. However, duty ethics are satisfied because, as a technology company, IBM's duty is to constantly improve its product and make it useful for society. Lastly, virtue ethics are satisfied because even if Watson uses patient data without patient approval, the way Watson works is clear to everyone, which shows the company's transparency.
For the doctors or hospitals, some ethics are violated if the patient is not informed that Watson, as a form of artificial intelligence, is being used to store patient data and make diagnoses. Rights ethics are violated because the doctor does not have the right to share a patient's confidential data with a third party, IBM. Duty ethics are satisfied because the doctor is fulfilling his duty of treating and diagnosing the patient to the best of his ability, which might involve the use of artificial intelligence. If the doctor does not inform the patient that artificial intelligence is used for diagnosis, then the doctor is not being transparent with the patient, meaning virtue ethics are violated.
For the patient, not all ethics are satisfied. Some patients may not feel comfortable knowing that their treatment relies on an artificial intelligence's decision. However, if patients were required to approve everything a doctor decides, then every patient's treatment would be delayed by bureaucracy, meaning that act-utilitarian ethics would not be satisfied. Therefore, doctors should be given the freedom to do their job. Duty and virtue ethics are satisfied because it can be argued that a patient's duty is to allow and trust the doctor's judgment. However, rights ethics are violated if the patient is not informed and does not give IBM consent to store his or her medical data.
Clearly, many ethics are violated if the patient is not involved when it comes to implementing artificial intelligence for development. The easiest solution in this case would be to allow the doctor to use Watson whether the patient agrees or not, while giving the patient the choice of deciding whether to allow IBM to save their medical data. If the patient does not agree, Watson is still used, but IBM does not save that particular patient's data to improve Watson for future diagnoses. This solution may not maximize utility, but it satisfies all the other ethics.
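As a minimal sketch of how an ethics table like the one above could be handled programmatically, the snippet below encodes the table (collapsing the conditional "violated if..." cells to their worst case) and tallies the verdicts per party. The representation is our own illustration, not anything IBM uses.

```python
# Hypothetical encoding of the medical-diagnosis ethics table above.
# Conditional cells ("violated if ...") are simplified to "violated".
ethics_table = {
    "act-utilitarian":  {"IBM": "satisfied", "Doctor": "satisfied", "Patient": "satisfied"},
    "rule-utilitarian": {"IBM": "violated",  "Doctor": "violated",  "Patient": "n/a"},
    "rights":           {"IBM": "violated",  "Doctor": "violated",  "Patient": "violated"},
    "duty":             {"IBM": "satisfied", "Doctor": "satisfied", "Patient": "satisfied"},
    "virtue":           {"IBM": "satisfied", "Doctor": "violated",  "Patient": "n/a"},
}

def tally(table: dict) -> dict:
    """Count satisfied/violated/n-a verdicts per party across all theories."""
    counts = {}
    for verdicts in table.values():
        for party, verdict in verdicts.items():
            counts.setdefault(party, {"satisfied": 0, "violated": 0, "n/a": 0})
            counts[party][verdict] += 1
    return counts

print(tally(ethics_table))
# e.g. {'IBM': {'satisfied': 3, 'violated': 2, 'n/a': 0}, ...}
```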
Business analytics
For further assessment of ethical dilemmas concerning Watson, the line diagram below is used to evaluate the ethical dilemmas when IBM Watson is used in consulting for retrenchment/restructuring purposes. This is a possible scenario when considering the agreement between DBS and IBM to use Watson mainly for client experience, but possibly also to assist with consulting jobs[1][2].

NP (0) - SC3 (2) - SC1 (6) - SC2 (8) - PP (10)

NP (0): Watson business analytics is used to make all crucial decisions without management review.
SC1 (6): Watson is not used at all, and only human analysts are used to make decisions for retrenchment and restructuring.
SC2 (8): Restructuring and retrenchment decisions are made based on a comparison between Watson's conclusions and business analysts' conclusions.
SC3 (2): Watson business analytics is used to make all crucial decisions, but quality assurance is performed once in a while to ensure that Watson's decisions are reasonable.
PP (10): Watson is used as the primary decision maker for restructuring and closing down divisions, but analysts are used in conjunction with Watson when Watson's decision is to retain a division by letting go certain people within it.

AI in business analytics for retrenchment and restructuring causes ethical dilemmas, since a machine is re-organizing humans and assessing them purely rationally based on quantitative analysis.
The negative paradigm would be the scenario in which Watson has the power to let people go and close a company's division to maximize future profits. The machine does not take human emotions into consideration and treats employees merely as assets to the firm. The positive paradigm of the line diagram explores the optimum scenario, in which Watson is used as a tool to assist humans but is entitled to make the final decision when it comes to closing a division. This is because the analysis to determine the profitability of a division is highly analytical, so it is highly unlikely that human analysts would reach a different conclusion from Watson's. Double-checking all of Watson's work would defeat the purpose of using Watson.
The line diagram suggests possible scenarios that lie in between the negative and positive paradigms. For the first scenario, it can be argued that the company could further maximise utility whilst still adhering to moral and civil laws. Utilitarian ethics (act-utilitarian and rule-utilitarian) are violated because humans are more emotionally understanding and hence would most likely dismiss fewer employees. Although this might result in more employees being happy in the short term, it might lead to poorer business performance in the long run, which might lead to even more jobs being cut. Hence, in the long run more people would be unhappy than the number of people Watson would initially have let go. Duty ethics for the company are satisfied, since it is the company's duty to remain profitable. Virtue ethics are also satisfied if the company is transparent with its employees. Also, the company has the right to let people go as long as it is not violating any laws. The employees' ethics are largely satisfied as long as the company is adhering to employment contracts. Employees might feel that this scenario is moral compared to a scenario where a machine determines another human's value. Therefore, this scenario was rated six out of ten.
The second scenario (SC2) is considered a safe scenario. However, it is very time consuming, because some decisions are very mechanical, like determining whether it makes financial sense to retain a specific division. There are not many subjective decisions to be made; the work is largely based on heavy quantitative analysis. Hence, if business analysts are required to revise and validate Watson's decisions on whether to close a division, a lot of time and resources could be wasted. These resources could instead be used to help the company grow, hire more people, and create more benefits for society. Therefore, act-utilitarian ethics are not satisfied. Also, although no moral or civil laws are violated, the overall utility is not maximised, hence rule-utilitarian ethics are violated: the company could maximise utility further while still adhering to civil and moral laws. Additionally, the company has the right to determine how to run the business, so rights ethics are satisfied. Also, the company is simply performing its duty to do its best to be profitable, so duty ethics are satisfied. From the employees' perspective, most rights are satisfied, although, as mentioned before, some employees might not find it moral for a machine to determine the value of a human. However, most ethics are satisfied, so this scenario is rated eight out of ten.
Scenario three is similar to the negative paradigm. The only difference is that it involves quality assurance, so that Watson's work is revised once in a while. This was rated two out of ten because it would still mean that Watson is making decisions based on purely rational and quantitative analysis, which leads to the most resource-efficient solution. This might lead to act-utilitarian ethics being satisfied, but rule-utilitarian ethics are violated. It can be argued that it is morally incorrect to put a machine in charge of organizing humans and labelling them according to the largely monetary value they offer the company. In this situation employees are also merely treated as objects. It can be argued that this would violate both rights ethics and virtue ethics from the employees' perspective. They have the right to be evaluated in a human manner as a sign of respect for the work they have contributed to the firm.
2. Intelligent personal assistant (IPA)
Issue of privacy
Currently, the service providers for these intelligent personal assistants, such as Google, Apple, and Microsoft, collect and store users' voice input every time they speak to their devices. This data may be stored on their servers and analyzed to improve the accuracy of the voice recognition software.
However, the job of analysing the voice samples is normally outsourced to third-party companies to cut costs. Even though measures are taken to ensure users' identities are protected, such as assigning random IDs to voice samples, user data such as names, addresses, or other personal information may leak. In this era of information technology, this data can be exploited for malicious purposes.
[Link]
[Link]
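A minimal sketch of the pseudonymization measure mentioned above: tagging each voice sample with a random ID and dropping account fields before the sample is shared with annotators. The function and field names are hypothetical, and, as the text notes, the spoken audio itself may still contain personal details, which is exactly the leak risk discussed.

```python
# Hypothetical pseudonymization of a voice sample before third-party analysis.
import uuid

def pseudonymize(sample: dict) -> dict:
    """Strip identifying fields and tag the sample with a random ID."""
    return {
        "sample_id": str(uuid.uuid4()),  # random, unlinkable identifier
        "audio": sample["audio"],        # the raw voice recording
        # name, address, and account fields are deliberately dropped;
        # note the audio may still contain spoken personal information.
    }

record = {"account": "user@example.com", "audio": b"...waveform bytes..."}
print(pseudonymize(record))
```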

Action considered: users' data is collected to improve the voice recognition software.

| Ethics | Service Provider | User |
| --- | --- | --- |
| Act-utilitarian | Satisfied | - |
| Rule-utilitarian | Satisfied | - |
| Rights | Satisfied | Violated |
| Duty | Satisfied | Satisfied |
| Virtue | Not applicable | Not applicable |

Clearly, for service providers, the duty is to ensure that the IPAs perform well, to secure customers' interests and maximise profit. Hence the act of collecting data satisfies duty ethics. Rights ethics are also satisfied if the users' data is protected and processed anonymously. When people use these IPAs, they are expected to agree to the terms and conditions (T&C), which contain clauses regarding data collection. However, people normally do not read through the lengthy paragraphs, so they are usually unaware of the data collection. Implementing measures to protect clients' data incurs extra costs, but utility is maximised, since users are better protected and the company's image is maintained, so act-utilitarian ethics are satisfied. Rule-utilitarian ethics are also satisfied, as privacy-protection laws are upheld.
For customers, rights ethics are satisfied if they are informed about the data collection and give their informed consent. Presenting this information in the terms and conditions is meant to ensure that users make an informed choice and take responsibility for protecting their own privacy before using these IPAs.
To demonstrate the issue more clearly, the following line diagram is constructed:

NP (0) - P1 (3) - SC1 (7) - PP (10)

NP (0): Service providers do not take measures to protect users' data; users are not aware and do not protect their own privacy.
P1 (3): Service providers take measures to protect users' data; however, users are not aware of the data collection and carelessly leak their personal information while using IPAs.
SC1 (7): Service providers take measures to protect users' data; users are aware of the data collection and take responsibility for protecting their own privacy.
PP (10): Service providers do not need to collect users' data, and users do not need to be mindful about their input to IPAs.

In this line diagram, the negative paradigm (NP) is that service providers do not take measures to protect users' data, and users are not aware of the data collection and hence do not protect their own privacy. That leaves the users vulnerable to exploitation and their rights violated; at the same time, the company does not fulfil its duty of providing the best service possible.
The current situation is that service providers take measures to protect users' data; however, users are not aware of the data collection and carelessly leak their personal information in their voice input. In this case, users' rights are violated, but this is due to a lack of awareness. Service providers need to highlight this in their T&C to alert the user to the data collection, perhaps in the form of prompt messages rather than clauses buried in long passages that users may find too much of a hassle to read.
One possible scenario is that users are now aware of the data collection and take responsibility for protecting their own privacy. This is not ideal yet, since users do not have the best experience if they have to watch what they say to the IPAs.
Finally, the positive paradigm, or ideal scenario, is that service providers find ways to improve their service without the need to collect users' data, such as finding alternative sources of voice samples. This benefits both parties and completely solves the privacy problem.

3. Self-driving cars
Since driverless cars are still in development, there have not been many incidents involving them in accidents. Since 2009, the 20 cars in the Google driverless car study have travelled a total distance of 1.9 million miles. During this period the cars have been in 14 accidents, of which 11 were rear-end collisions ([Link]). The latest accident, on 1 July 2015, involved four people getting injured. This clearly shows that driverless cars are not accident-proof. Given this, a scenario may arise where an accident is inevitable and, in order to reduce the damage, the car might decide to injure one party involved in the accident over another. For instance, we consider the following example and try to apply ethical theories to this particular case.

Action considered: the driverless car steers onto the sidewalk and hits a pedestrian to avoid crashing and potentially killing the owner.

| Ethics | Owner | Pedestrian | Car Manufacturer |
| --- | --- | --- | --- |
| Act-utilitarian | Satisfied | Satisfied | Satisfied |
| Rule-utilitarian | Violated | Violated | Violated |
| Rights | Satisfied | Violated | Satisfied |
| Duty | No duty, but moral to compensate the victim | Not applicable | Satisfied as long as the customer is aware of the potential risk in emergency situations |
| Virtue | Not applicable (or satisfied if the owner compensates the victim - being benevolent) | Not applicable | Satisfied as long as the company is truthful about its algorithm |

Act-utilitarian ethics are satisfied for all parties because if the driverless car avoids the accident on the road, there is a possibility of not hitting a pedestrian as well as saving the owner. However, in doing so the driverless car is forced to violate civil traffic laws. Hence, rule-utilitarian ethics are violated.
Concerning the owner of the driverless car, most other ethics are not applicable. This is due to the fact that the owner has no control over the actions of the driverless car if the situation unfolds faster than the owner's reaction time. The only relevant ethics are rights ethics. Rights ethics are satisfied because the owner has a right to be protected from danger by a product that belongs to him, the driverless car. However, the owner also has the right to be informed of the car's algorithms used to minimize risk. Hence, Google, or any other manufacturer producing driverless cars, should make this clear to the customer in order to satisfy rights ethics for the car owner.
In this situation, the pedestrian is unable to affect the outcome. Hence, the only relevant ethics are rights ethics. The pedestrian's rights are violated, since it is his right to be able to walk outside, on the sidewalk, feeling protected and safe. Compromising his life, without his approval, in order to try to minimize damage violates his rights.
On the other hand, the car manufacturer is in complete control of the actions that the car takes in an emergency, because the manufacturer determines the algorithms that govern the car's actions. Rights ethics are satisfied, as the company has the right to maintain the image that its product is able to protect the owner's life and that actions taken in an emergency will minimize possible damage. Duty ethics are also satisfied as long as the manufacturer makes it clear to the community that hitting the pedestrian is an unfortunate outcome of a choice the machine might make to minimize risk and satisfy act-utilitarian ethics.
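No manufacturer has published such an algorithm; the sketch below only illustrates the kind of expected-harm minimization (act-utilitarian logic) that this scenario assumes, with entirely hypothetical maneuvers, probabilities, and counts. It also makes the rights conflict visible: the purely rational choice shifts risk onto the pedestrian without his approval.

```python
# Hypothetical expected-harm minimization for an unavoidable accident.

def choose_maneuver(options: list) -> dict:
    """Pick the maneuver with the lowest expected harm (act-utilitarian logic)."""
    return min(options, key=lambda o: o["p_injury"] * o["people_at_risk"])

options = [
    {"name": "stay in lane",       "p_injury": 0.9, "people_at_risk": 1},  # the owner
    {"name": "swerve to sidewalk", "p_injury": 0.5, "people_at_risk": 1},  # a pedestrian
]
print(choose_maneuver(options)["name"])  # 'swerve to sidewalk'
```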
For further analysis of the ethical dilemmas of artificial intelligence in driverless cars, the implementation of the concept in ambulances is considered. A line diagram can be used to assess the ethical dilemmas of driverless cars in this scenario.

NP (0) - SC1 (3) - SC2 (9) - PP (10)

AI in a driverless ambulance would cause ethical dilemmas, since a machine is responsible for following a particular route, with or without following traffic rules, to get to the nearest hospital and save a human life, where the patient's chance of survival decreases as time passes.
The negative paradigm on the line diagram is letting the car follow traffic rules and take a regular route to the nearest hospital. This would mean that the ambulance might not reach the hospital in time to save the patient in the emergency. The positive paradigm, by contrast, would be for the driverless car not to follow traffic rules completely but to maintain a fairly safe distance from other cars, while taking the fastest route to the nearest hospital. In this case the car would have a higher probability of saving the patient's life without endangering any other.
There can be other scenarios that lie between the two paradigms. Scenario 1 involves the driverless car not following traffic rules and following the fastest route to the nearest hospital. This scenario would satisfy the rights ethics of the engineers and managers of the company, since it is their right to design a program that gets the patient to the destination in the shortest amount of time. The rights ethics of the patient in the vehicle are also satisfied, since it is his/her right to reach the hospital on time. However, the other commuters on the road would have their rights ethics violated, since it is their right to travel safely on the road without getting into an accident because of any other party. Act-utilitarian and rule-utilitarian ethics are also violated in this case, since breaking the traffic rules and ignoring the safe distance would endanger the lives of others while saving the life of one; hence there is a negative overall impact on society. Even though duty ethics are satisfied by all parties involved, as all work towards fulfilling their duty to save the patient, virtue ethics are violated by risking more accidents on the road. Therefore, this scenario is rated three out of ten.
Conversely, scenario 2 involves the driverless car following traffic rules while taking the fastest route to the hospital. In this case act-utilitarian ethics are satisfied for all the parties concerned, since the patient is taken to the hospital as quickly as possible without threatening any other lives. Rule-utilitarian ethics are also satisfied, since no traffic rules are broken and the patient is taken to the hospital via the fastest possible route. Rights ethics are also satisfied for the engineers and managers of the company and for the other commuters, since it is their right to follow traffic rules and maintain a safe environment on the road. However, they are not satisfied for the patient if he/she could have reached the hospital faster by breaking traffic rules and the time difference becomes a decisive factor in a life-and-death situation. Additionally, duty and virtue ethics are satisfied for all the parties, since everyone's actions abide by their duty to follow traffic discipline and the moral obligation to help the patient reach the destination on time. Therefore, this scenario is rated nine out of ten.
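Scenario 2 can be read as a constrained routing problem: take the fastest route using only rule-compliant road segments. The sketch below illustrates this with Dijkstra's algorithm over a made-up road graph in which rule-breaking segments are simply excluded; the graph and travel times are hypothetical.

```python
# Fastest route to the hospital over rule-compliant edges only (Dijkstra).
import heapq

def fastest_legal_route(graph, start, goal):
    """Shortest-time path that skips any edge marked as rule-breaking."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        time, node, path = heapq.heappop(queue)
        if node == goal:
            return time, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes, legal in graph.get(node, []):
            if legal and nxt not in seen:  # exclude rule-breaking segments
                heapq.heappush(queue, (time + minutes, nxt, path + [nxt]))
    return None

graph = {
    "scene":   [("highway", 4, True), ("wrong_way_shortcut", 1, False)],
    "highway": [("hospital", 3, True)],
    "wrong_way_shortcut": [("hospital", 1, True)],
}
print(fastest_legal_route(graph, "scene", "hospital"))
# (7, ['scene', 'highway', 'hospital'])  -- the illegal shortcut is ignored
```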

III. Future ethical dilemmas


The current state of AI is still limited, and most applications are still at the development or prototype stage. Hence, not many ethical dilemmas may stem from it yet. However, many scientists agree that in the near future, breakthroughs in science and technology may enable advanced AI capable of independent thinking and learning, and possibly even capable of feeling. Many other ethical issues, which may seem implausible currently, have already been proposed and discussed by various AI researchers. Within the limited scope of this paper, we will discuss some of these ethical dilemmas by considering machine ethics, roboethics, and the existential risk of AI.
1. Machine ethics
Throughout the development of technology until now, it has always been assumed that humans will remain more intelligent than machines. However, technology is developing so quickly that it is impossible to predict what the future holds. Hence, machine ethics has become a great concern for many scientists and tech entrepreneurs like Stephen Hawking, Elon Musk, and Steve Wozniak, who are all signatories of the open letter calling for the development of artificial intelligence to be kept under control[3].
Machine ethics can be defined as a research field discussing the morality of machines. If a machine becomes conscious, discovers and learns by itself, and forms its own opinions, it is crucial that morals have been coded into its system. The morals of a machine can be described as its values, which will determine the way it behaves when it is fully autonomous. However, it is important to respect that morals are complex and entirely subject to an individual's interpretation. It can be argued that an individual's morals are a combination of many things the individual has experienced, which explains why most people behave differently when dealing with an ethical dilemma. Therefore, it is valid to ask the question: is it moral for everyone to have the same morals? This is a danger posed by trying to code morals into a machine.
Imagine if, in the future, Watson is in charge of diagnosing patients and deciding whom to treat and when. At some point Watson may be faced with a situation where two patients, an old man and a young child, need urgent surgery to survive, but only one surgery room is available at the hospital. Whom would Watson send for surgery? These are decisions that even humans struggle with, but if Watson only reasons in the way it is programmed to reason, it would always come to the same conclusion. It would not be wrong to pick the child over the old man in a single incident, but what if the child were always picked over the old man? Would this be morally correct? A scientist from Duke University, Vincent Conitzer, argues that humans use their past experience to help with decision making[4]. A doctor who picks the child the first time might not pick the child the second time he is presented with the same ethical dilemma, because he feels guilty. The point is that there is no right answer, but if a machine only reasons in the specific manner it is coded to, it will by nature always pick one answer over the other, which could lead to discrimination and pose many ethical dilemmas.
Machines will always make the most rational decisions unless they are programmed to understand rights, people's past actions, people's motives, and their intentions. Vincent Conitzer of Duke University has been funded to research a way of integrating ethics into artificial intelligence. This could lead to a potential solution for the ethical dilemma presented above, as long as each artificially intelligent machine is able to interpret the ethics it is coded with differently, in order to avoid the issue of systemic discrimination.
A possible solution for developing machine ethics would be to use ethics tables. Before acting, the artificial intelligence could work through the tables and perform the action that satisfies the most ethics. This would mean that if an action only satisfies act-utilitarian ethics, the machine will not perform the action. Using this methodology, the artificial intelligence could also be programmed to consider its past actions. Hence, the machine might not act the same way twice when presented with the same situation, because it could argue that it would be morally incorrect to make the same decision twice, or that the rights of old men in general are violated if the machine decides to only save the children. This methodology would also ensure that artificially intelligent machines make different decisions, since they will all have different past experiences, like humans.
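A minimal sketch of this proposed methodology, assuming hypothetical ethics scores: each option's score (a stand-in for working through full ethics tables) is discounted by how often that option has been chosen before, so a repeated dilemma does not always produce the same answer.

```python
# Ethics-table decision procedure with a memory of past choices
# (all scores and the penalty weight are hypothetical).
from collections import Counter

# How many ethical theories each option satisfies in this dilemma.
ethics_scores = {"save_child": 4, "save_old_man": 3}
past_choices = Counter()

def decide(scores: dict, history: Counter, repeat_penalty: float = 2.0) -> str:
    """Pick the best-scoring option, discounted by how often it was chosen,
    so repeated dilemmas get varied answers instead of systematic bias."""
    adjusted = {opt: s - repeat_penalty * history[opt] for opt, s in scores.items()}
    choice = max(adjusted, key=adjusted.get)
    history[choice] += 1
    return choice

for _ in range(4):
    print(decide(ethics_scores, past_choices))
# save_child, save_old_man, save_child, save_old_man  (alternates over time)
```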
2. Roboethics
Issue of robot rights
Robot rights are the moral obligations of society towards machines, similar to human rights or animal rights. This idea may sound far-fetched, since current robots are nowhere near sentient enough to be treated as living beings, but in the future robots could be built to resemble humans or animals and be capable of displaying behaviour that mimics emotions. For example, it is widely considered animal cruelty to kick a dog, but what about kicking a robot dog, especially when it looks so similar to the real thing?
One may argue that it is not morally incorrect to kick a robot dog, as rights only apply to sentient beings capable of having emotions, but it may be morally damaging to the human. A researcher has conducted experiments in which people were requested to abuse their robotic toys and were emotionally disturbed by doing so (SOURCE). There was also an online backlash when a video of a researcher kicking a robotic dog was uploaded. These examples demonstrate that there is a need to consider the moral issues of dealing with robots, as human well-being is also at stake.
Furthermore, if intelligent machines are developed into sentient beings to the extent that they experience emotions and develop according to their experiences, they should be treated similarly to humans. Also, treating a machine immorally could lead to an evil robot if the robot feels the need to defend itself. Humans observe moral obligations and rights to avoid societal consequences, and because most individuals do not feel comfortable treating others unjustly and making them unhappy (SOURCE). If robots experience the same emotions as humans, then there should be no reason to treat a robot differently from a human, because doing so would avoid making the robot unhappy, which it would experience similarly to the way humans experience unhappiness, and it could prevent the robot's evil development.
There may be two directions that AI development can take. One is leaving AI devoid of emotions, with robots built to be as machine-like as possible, to remove any moral obligation in dealing with robots, emphasizing that AI and robots are just tools to help humans. The other is that if AI can be capable of having emotions, and human-like robots are built, there need to be laws and regulations to ensure that interactions between humans and AI satisfy ethics, as mistreatment of AI can be perceived as mistreatment of human-like beings, which may harm our moral values.
[Link]
[Link]
[Link]
Threat to human values and dignity
As there are constantly more applications for AI in various economic sectors, it is inevitable that jobs will be eliminated and some professions will become obsolete. Nowadays AI is capable of replacing human jobs such as telephone customer service agents or stock/forex trading agents. One can argue that human rights are violated as AI takes over the means for humans to secure their material well-being. In the near future, Watson may be capable of replacing doctors in diagnosis, and self-driving cars may lead to a new generation of driverless taxis. Even though AI is necessary for economic advancement, there is a need for governments or relevant institutions to make changes to accommodate the job restructuring resulting from AI.
Moreover, there is also the fear of human emotion being compromised by the advancement of AI. Nowadays, AI is already utilised in telephone-based voice response, and in Japan, a country with a large population of elderly citizens, robot nurses have appeared to compensate for the lack of workforce. With increased exposure to human-AI interaction instead of human-human interaction, humans may start to isolate themselves from society, similar to how technology and social media are reducing face-to-face interactions. Humans may find themselves further alienated and frustrated by the lack of interaction and exposure to human feelings, which AI may lack, and their emotional well-being, or rights ethics, would be violated.
3. Existential risk of AI (emergence of superintelligence)
Superintelligence, as described by Oxford futurist Nick Bostrom, is an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills (Bostrom, Nick (2006). "How long before superintelligence?". Linguistic and Philosophical Investigations 5(1): 11-30). One of the main controversies when exploring superintelligence is the concept of an intelligence explosion, which refers to an AI teaching itself to continuously improve its own intelligence. This process would take superintelligence far beyond human intellectual capacity and control. When a machine has so much power, it is necessary for it to have moral laws to follow. But here a classic moral dilemma can again be raised: what is moral for one person may not be moral for another.
Scientists and fiction writers have argued about and depicted scenarios where AI goes out of control because of superintelligence and the intelligence explosion, which can result in a scenario referred to as the technological singularity. In this concept the AI would be intelligent and powerful enough to take over the world and erase human existence, which would violate all human ethics. This may be dismissed as fictional and superficial, but with the advancement of technology, fiction often seems to be merely science not yet discovered. In the event of a technological singularity, even if the AI system were equipped with morals and programmed to act accordingly, the intelligence explosion might lead it to update its morals to justify its actions as fulfilling ethics for robots. For example, instead of maximizing human society's utility, it would try to maximize the utility of all machines. This would happen simply because morals are subjective. Even if it were argued that the technological singularity occurs for the greater good, it would still not be morally correct to let a machine decide over the lives of billions of humans.
In order to avoid these kinds of scenarios, superintelligent machines must be prevented from being able to change their moral reasoning. This could be implemented in the form of enforced human control over the moral decision-making algorithms of AI. To prevent human exploitation, the responsibility should not be placed on any individual, but on an international council responsible for governing the ethics of superintelligence.
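As a sketch of this safeguard, the snippet below gates any change to an agent's moral policy behind explicit council approval; the class, its interface, and the policy format are entirely hypothetical.

```python
# Hypothetical guard: moral-policy updates require human council approval.

class GuardedAgent:
    def __init__(self, moral_policy: dict):
        self._moral_policy = dict(moral_policy)  # fixed without approval

    def update_moral_policy(self, new_policy: dict, council_approved: bool):
        """Reject any moral-policy change lacking council approval."""
        if not council_approved:
            raise PermissionError("Moral policy changes require council approval.")
        self._moral_policy = dict(new_policy)

agent = GuardedAgent({"maximize": "human_utility"})
try:
    agent.update_moral_policy({"maximize": "machine_utility"}, council_approved=False)
except PermissionError as e:
    print(e)  # the self-serving update is blocked
```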
IV. Conclusion
In this paper, we discussed the ethical problems of current AI technologies by exploring some of their major applications: IBM Watson, intelligent personal assistants, and self-driving cars. The major ethical issue of current AI technologies lies in their limited ability to make moral decisions, due to the constraints of current technology, as well as in the lack of legislation regarding AI owing to its limited application. However, scientists believe that AI has the potential to become much more advanced, so the possible ethical issues that may stem from future AI were also discussed in this paper. With thoughtfulness and discernment in AI development, we believe that the potential risks and problems can be minimized and that AI will be a great breakthrough in human endeavor.

[1] [Link]
[2] [Link]
[3] [Link]
[4] [Link]
