
LETTER FROM THE EXECUTIVE BOARD

It is our distinct pleasure, together with the Organizing Committee, to welcome you to the SOCHUM committee at HUSMUN’24. We sincerely hope that this will be the best MUN conference you ever attend.

This background guide offers only a glimpse of the agenda at hand; it is in no way exhaustive, nor is it intended to replace individual research. Delegates must remember that every country has a different foreign policy through which this simulation must be viewed. The Executive Board encourages you to research the situation further, including the positions of the member states and the intricate details of the political situation that has developed.

Delegates should know that every country has its own views on this agenda, and those views are to be followed during debate. As the Executive Board, we expect delegates to move the committee forward in a formal manner, and we will be here to moderate the proceedings. Feel free to direct your queries to the Executive Board.

If this is your first MUN, we highly encourage you to contact a friend or acquaintance with experience and come to terms with the intricacies of the committee, the agenda, and the procedure of an MUN conference.

Agenda: The impact of artificial intelligence (AI) and robotics on human rights and social development.

Chairperson: Atheeb Hussain


Vice Chairperson: Aaryan Vishnu Anand
Director: Janani Hari
Introduction

Artificial intelligence (AI) is the simulation of human intelligence in machines that can reason, learn,
solve problems, and automate processes. Artificial intelligence is quickly finding its way into our
daily lives through the use of personal devices, healthcare, banking, transportation, and
manufacturing.

Applications

Machine learning: Enables computers to learn from data without explicit programming, powering
tasks like image recognition, spam filtering, and personalized recommendations.

Natural language processing (NLP): Allows computers to understand and respond to human
language, enabling applications like chatbots, voice assistants, and machine translation.

Computer vision: Enables computers to interpret visual information, powering applications like
facial recognition, self-driving cars, and medical image analysis.

Robotics: AI plays a crucial role in controlling and directing robots, allowing them to perform tasks
autonomously or semi-autonomously.
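
The machine-learning entry above — learning from examples rather than explicit rules — can be made concrete with a minimal sketch. This is a hypothetical toy spam filter with invented training messages, not a description of any real product:

```python
from collections import Counter

# Tiny illustration of "learning from data": instead of hand-writing
# spam rules, we count word frequencies in labelled examples and let
# those counts drive the prediction. All messages here are invented.
spam = ["win cash now", "free prize win", "claim free cash"]
ham = ["meeting at noon", "see you at lunch", "project meeting notes"]

spam_counts = Counter(w for msg in spam for w in msg.split())
ham_counts = Counter(w for msg in ham for w in msg.split())

def classify(message: str) -> str:
    """Label a message by which training set its words resemble more."""
    spam_score = sum(spam_counts[w] for w in message.split())
    ham_score = sum(ham_counts[w] for w in message.split())
    return "spam" if spam_score > ham_score else "ham"

print(classify("win free cash"))   # → spam (learned from the examples)
print(classify("lunch meeting"))   # → ham
```

Nothing in `classify` was written for the word "prize" or "lunch" specifically; the behaviour comes entirely from the labelled examples, which is the defining trait of machine learning.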

Do human beings really need artificial intelligence?

Does human society actually require AI? It depends. If a person wants to work more quickly and efficiently, and to work continuously without taking breaks, then yes. But if people are content to live in harmony with nature and feel no need to alter the natural order, then no. Humanity has always sought faster, easier, more efficient, and more convenient ways to do tasks; this urge for further progress drives people to search for novel and improved methods of completing them. History demonstrates this tendency: when Homo sapiens first found that tools could ease many of the difficulties of daily life, they also realized that by inventing tools they could perform tasks more effectively, quickly, and intelligently. Yet as human technology continued to advance in the early 1900s, Aldous Huxley warned in his book Brave New World that humanity might enter a future in which the development of genetic technology would result in the creation of monsters or superhumans.

Additionally, modern AI is making inroads into the healthcare sector by helping physicians diagnose
patients, identify the causes of illnesses, recommend different courses of action, perform surgeries,
and determine whether a patient's condition is life-threatening. Surgeons at Washington's Children's National Medical Center recently conducted research that successfully demonstrated surgery using an autonomous robot. The scientists reported that, after they supervised the robot performing soft-tissue surgery and sewing a pig's colon together, the robot completed the procedure more skillfully than a human surgeon. This highlights how robot-assisted surgery can improve the skills of doctors performing open surgery while overcoming the constraints of previously available minimally invasive surgical treatments.
Above all, we witness the most well-known applications of AI, such as autonomous vehicles (like
drones and self-driving cars), medical diagnosis, art creation, gaming (like Go or Chess), online
assistants (like Siri), image recognition in photos, spam filtering, flight delay prediction, etc. These
have all made life so much more convenient and easy for humans that we now take them for granted.
Even if AI is not strictly necessary, it has become indispensable, and without it, our world would be in
disarray in many ways right now.

Negative impact

Questions have been raised: will human labor become obsolete as artificial intelligence advances, since everything can be completed mechanically? Will people eventually grow lazy and deteriorate to the point where we revert to our most basic state of existence? Since evolution takes eons to complete, we will not be able to observe humanity's regression. But what if AI grows so strong that it can defy commands from its master, humanity, and program itself to rule?

● There will be a significant societal shift that upends our way of life in the human community. To survive, humanity has had to work hard, but thanks to artificial intelligence (AI) we can simply teach a machine to perform a task for us without ever picking up a tool. AI will eventually replace face-to-face meetings as the primary means of exchanging ideas, thereby decreasing human connection. With the necessity for face-to-face meetings eliminated, AI will act as a mediator between individuals.

● The second issue is unemployment as machinery replaces many jobs. Many vehicle assembly lines today are run by robots and machinery, which has cost traditional workers their jobs. Supermarket store clerks will no longer be necessary because computerized devices can replace human work.

● Wealth disparity will widen, as AI investors will receive the lion's share of profits. The gap between the rich and the poor will grow, and there will be further evidence of the so-called "M-shaped" wealth distribution.

● As AI is trained and taught to perform tasks, new concerns arise, not just in the social sense but within AI itself: the technology may someday grow beyond human control, leading to unexpected issues and repercussions. This refers to AI's ability to operate autonomously on its own path, disregarding instructions from a human controller, once it has been loaded with all the necessary algorithms.
● AI may be created by human masters with racial bigotry or an egocentric orientation, with the intention of harming particular individuals or targets. For example, the UN voted to restrict the use of nuclear power out of concern that it could be used indiscriminately to wipe out humanity, or to target specific racial or geographic groups in order to establish dominance. AI could likewise be directed against specific races or specifically programmed targets in order to carry out its programmers' orders to destroy, potentially causing a global catastrophe.

Positive impact

However, there are also many advantages for people, particularly in the area of healthcare. AI endows computers with the ability to reason, learn, and use logic. When scientists, physicians, mathematicians, engineers, and medical researchers collaborate, they can create artificial intelligence (AI) targeted at medical diagnosis and treatment, providing dependable and secure healthcare delivery systems. Robotic systems can be developed to perform delicate medical procedures with precision, in addition to digital computers assisting with analysis as health professionals and medical researchers hunt for new and effective ways to treat ailments. The following examples illustrate how AI is improving healthcare.

Fast and accurate diagnostics

● IBM's Watson computer has been used for diagnosis with fascinating results. Loading the data into the computer yields AI's diagnosis almost instantly, and the system can also offer various treatment options for physicians to consider. The procedure works roughly as follows: the digital results of a physical examination are loaded into the computer, which considers all possibilities, automatically determines whether or not the patient suffers from any deficiencies or illnesses, and even suggests various kinds of available treatment.
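
The diagnostic loop described in this bullet — load examination results, check against what is known, flag abnormalities, suggest follow-up — can be illustrated with a deliberately simplified sketch. The reference ranges and suggestions below are invented placeholders for illustration only, not medical guidance and not Watson's actual logic:

```python
# Toy screening step: compare each measurement against a reference
# range and suggest a follow-up when it falls outside. All values
# and suggestions here are hypothetical.
REFERENCE_RANGES = {
    "hemoglobin": (12.0, 17.5),   # g/dL, illustrative bounds only
    "glucose": (70, 140),         # mg/dL, illustrative bounds only
}
SUGGESTIONS = {
    "hemoglobin": "evaluate for anemia",
    "glucose": "evaluate for glucose dysregulation",
}

def screen(results: dict) -> list:
    """Flag any measurement outside its reference range."""
    findings = []
    for test, value in results.items():
        low, high = REFERENCE_RANGES[test]
        if not (low <= value <= high):
            findings.append(f"{test} abnormal: {SUGGESTIONS[test]}")
    return findings

print(screen({"hemoglobin": 10.2, "glucose": 95}))
```

A real diagnostic AI learns such patterns from vast clinical datasets rather than from a hand-written table, but the input-analysis-suggestion pipeline is the same shape as the one this bullet describes.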

Socially therapeutic robots

● Pets are recommended to senior citizens to ease tension, reduce blood pressure, anxiety, and loneliness, and increase social interaction. Companion robots have now been suggested to accompany lonely elderly people, and even to help with some household chores. Therapeutic robots and socially assistive robot technology help improve the quality of life for seniors and the physically challenged [12].
Reduce errors related to human fatigue

● Human error in the workforce is inevitable and often costly, and the greater the level of fatigue, the higher the risk of errors occurring. AI technology, however, does not suffer from fatigue or emotional distraction. It avoids such errors and can accomplish duties faster and more accurately.

Artificial intelligence-based surgical contribution

● AI-based surgical procedures are now available for patients to choose. Although such AI still needs to be operated by health professionals, it can complete the work with less damage to the body. The da Vinci surgical system, a robotic technology that allows surgeons to perform minimally invasive procedures, is available in most hospitals now. These systems enable a degree of precision and accuracy far greater than procedures done manually. The less invasive the surgery, the less trauma it causes, with less blood loss and less anxiety for patients.

Improved radiology

● The first computed tomography scanners were introduced in 1971. The first magnetic resonance imaging (MRI) scan of the human body took place in 1977. By the early 2000s, cardiac MRI, body MRI, and fetal imaging had become routine. The search continues for new algorithms to detect specific diseases as well as to analyze the results of scans. All of these are contributions of AI technology.

Virtual presence

● Virtual presence technology can enable distant diagnosis of disease. The patient does not have to leave his or her bed; using a remote-presence robot, doctors can check on patients without actually being there, and health professionals can move around and interact almost as effectively as if they were present. This allows specialists to assist patients who are unable to travel.

Artificial intelligence ethics must be developed

The study of bioethics is concerned with the interactions between living things. Bioethics emphasizes the right and the good in biospheres and can be divided into three main categories: bioethics in the social context, concerning the relationships between humans; bioethics in the health context, concerning the relationship between doctors and patients; and bioethics in the environmental context, concerning the relationship between humans and nature, including animal ethics, land ethics, ecological ethics, and so on. All of these focus on the relationships that occur within and among natural beings.

As AI develops, humans face a new obstacle: relating to something that is not naturally occurring. A typical topic of discussion in bioethics is the link between humans and their surroundings as components of natural phenomena. Now, however, humans must contend with AI, an artificial, synthetic, man-made entity. Despite having created a great deal, humans have rarely been required to consider the moral implications of their own creations. AI lacks emotion and personality in and of itself, and AI engineers have realized the importance of giving AI the ability to discern, so that it will avoid deviant activities causing unintended harm. From this perspective, we understand that AI can have a negative impact on humans and society; thus, a bioethics of AI becomes important to ensure that AI will not take off on its own by deviating from its originally designated purpose.

Stephen Hawking warned early in 2014 that the development of full AI could spell the end of the human race. He said that once humans develop AI, it may take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, could not compete and would be superseded. In his book Superintelligence, Nick Bostrom argues that AI will pose a threat to humankind: sufficiently intelligent AI can exhibit convergent behavior such as acquiring resources or protecting itself from being shut down, and it might harm humanity.

The question is: should we apply bioethics to a human-made product that lacks biovitality? Is it possible for a computer to possess consciousness, mind, and mental states in the same way that humans do? Is it possible for a machine to be sentient and hence have rights? Can a machine injure someone on purpose? Regulation needs to be considered a bioethical requirement for the development of AI.

Research has demonstrated that AI can mirror the very biases that people have worked so hard to eradicate. AI has enormous potential to improve all facets of life, including business, employment, healthcare, and even security, once it becomes "truly ubiquitous." Politico Europe's AI correspondent, Janosch Delcker, addressed the technology's concerns by stating: "I don't think AI will ever be free of bias, at least not as long as we stick to machine learning as we know it today. I think it's vitally important to acknowledge the existence of those biases and to have policymakers work to reduce them" [17]. The European Union's High-Level Expert Group on AI presented its Ethics Guidelines for Trustworthy AI in 2019, which suggested that AI systems must be accountable, explainable, and unbiased. Three emphases are given:
● Lawful – respecting all applicable laws and regulations
● Ethical – respecting ethical principles and values
● Robust – being adaptive, reliable, fair, and trustworthy from a technical perspective while taking into account its social environment

MORE ON AI

The AI engine that powers ChatGPT is one prominent example of an AI system, but there are plenty more.

An artificial intelligence system is built on algorithms. A conventional algorithm is a collection of guidelines, or rules, that a machine or computer must follow in order to respond to queries and resolve issues. One kind of artificial intelligence (AI) system is machine learning, which generates its own instructions from the data it has been "trained" on; it then applies these instructions to produce a task-specific answer. In a sense, the program writes itself, and machine learning accounts for the recent strides in AI. Certain machine learning systems keep "learning" while they are being used for a specific job, thanks to inputs from the surrounding environment.
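
The contrast between a conventional algorithm and a machine-learning system can be shown in a minimal sketch (all numbers invented): the first function's rule is fixed in advance by a programmer, while the second derives its rule — here, a single decision threshold — from labelled training data:

```python
# A conventional algorithm: the decision rule is written by hand.
def rule_based(value: float) -> bool:
    return value > 50  # threshold chosen in advance by a programmer

# A minimal "machine learning" step: derive the threshold from data
# by placing the boundary midway between the two classes' averages.
def train_threshold(positives: list, negatives: list) -> float:
    mean_pos = sum(positives) / len(positives)
    mean_neg = sum(negatives) / len(negatives)
    return (mean_pos + mean_neg) / 2

threshold = train_threshold([80, 90, 85], [10, 20, 15])  # invented data

def learned(value: float) -> bool:
    return value > threshold  # the rule the program "wrote" for itself

print(threshold)      # the boundary inferred from the examples: 50.0
print(learned(60))    # True
```

Feed `train_threshold` different examples and `learned` behaves differently, with no code changed by hand — which is exactly the sense in which "the program writes itself."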

Unlike straightforward rule-based algorithms, an AI system does not always react the same way to the same input, owing to the nature of machine learning: unpredictability will characterize the system. A further difficulty is that machine learning systems are frequently a "black box." This means that it can be exceedingly challenging to explain retroactively why a system produced a specific result, even in cases where the inputs are known.

How could AI be deployed in armed conflicts?

Armed forces are investing heavily in AI and there are already examples of AI being deployed on the
battlefield to inform military operations or as part of weapon systems.

The ICRC has highlighted three areas in which AI is being developed for use by armed actors in
warfare, which raise significant questions from a humanitarian perspective:

● Integration in weapon systems, particularly autonomous weapon systems


● Use in cyber and information operations
● Underpinning military ‘decision support systems’

When it comes to the application of AI in warfare, autonomous weapon systems have drawn the greatest interest. For instance, concerns have been expressed that AI might be used to automatically initiate a strike against a person or a vehicle.
The International Committee of the Red Cross (ICRC) has called on states to enact new international regulations that would outlaw some autonomous weapons and restrict the use of others, including AI-controlled weaponry.

The dangers of using AI in decision support systems, cyber and information operations, and
cyberspace have received comparatively less attention; for more details, refer to the sections below.

If the international community does not adopt a human-centered approach to the use of AI in armed
conflict, all these applications may cause harm to people.

How is AI used to inform military decisions?

A decision support system is any computerized tool that uses AI-based software to generate analyses supporting military decision-making.

These systems gather, examine, and integrate data sources to identify persons or objects, evaluate behavioral patterns, offer suggestions for military activities, or even forecast future events or circumstances. For instance, by analyzing drone footage and other data sources, an AI image-recognition system might be used to detect military objects and suggest targets to the armed forces. Put differently, these artificial intelligence systems can help determine what or whom to strike, and when. Even more concerning are hints that AI systems may influence military judgments about the deployment of nuclear weapons.

Some contend that decision support systems can assist human decision-making in ways that reduce dangers to civilians and facilitate adherence to international humanitarian law. Others warn that relying too much on AI-generated results could jeopardize human judgment in legal decisions, violate international humanitarian law, and endanger civilian protection. This is especially true given how opaque and biased many of the machine learning systems available today are.

How could AI be used in cyber and information warfare?

AI is expected to change both how actors defend against and launch cyber-attacks.

For example, systems with AI and machine learning capabilities could automatically search for
vulnerabilities in enemy systems to exploit, while also detecting weaknesses in their own systems.
When coming under attack, they could simultaneously and automatically launch counter-attacks.

These types of developments could increase the scale of cyber-attacks, while also changing their
nature and severity, especially in terms of adverse impact on civilians and civilian infrastructure.
Information warfare has long been a part of conflicts. But the digital battlefield and AI have changed how information and disinformation are spread, and how disinformation is created. AI-enabled systems have been widely used to produce fake content – text, audio, photos and video – which is increasingly difficult to distinguish from genuine information.

Not all forms of information warfare involve AI and machine learning, but these technologies seem
set to change the nature and scale of how information is manipulated, as well as the real-world
consequences.

What are ICRC’s concerns around AI and machine learning in armed conflict?

The use of AI and machine learning in armed conflict has important humanitarian, legal, ethical and
security implications.

With rapid developments in AI being integrated into military systems, it is crucial that states address
specific risks for people affected by armed conflict.

Although there are a wide range of implications to consider, specific risks include the following:

An increase in the dangers posed by autonomous weapons;

Greater harm to civilians and civilian infrastructure from cyber operations and information warfare;

A negative impact on the quality of human decision-making in military settings.

It is important that states preserve effective human control and judgement in the use of AI, including
machine learning, for tasks or decisions that could have serious consequences on human life.

Legal obligations and ethical responsibilities in war must not be outsourced to machines and software.

What is the ICRC’s message for the international community?

It is critical that the international community takes a genuinely human-centred approach to the
development and use of AI in places affected by conflict.

This starts with considering the obligations and responsibilities of humans and what is required to
ensure that the use of these technologies is compatible with international law, as well as societal and
ethical values.
From our perspective, conversations around military uses of AI and machine learning, and any
additional rules, regulations or limits that are developed, need to reflect and strengthen the existing
obligations under international law, in particular international humanitarian law.

Convergence of AI and Robotics:

AI and robotics are increasingly converging, creating powerful systems that can learn, adapt, and interact with the physical world. This convergence is fueling significant advancements in various fields, leading to innovations such as self-driving cars, smart factories, and intelligent prosthetics.

Establishing Global Dialogue on the Ethics of Artificial Intelligence: the Role of UNESCO (Additional topic)

The world must ensure that new technologies, especially those based on AI, are used for the good of our
societies and their sustainable development. It should regulate AI developments and applications so that
they conform to the fundamental rights that frame our democratic horizon.

Many actors—businesses, research centres, science academies, United Nations Member States,
international organizations and civil society associations—are calling for an ethical framework for AI
development. While there is a growing understanding of the issues, related initiatives need more robust
coordination. This issue is global, and reflection on it must take place at the global level so as to avoid a
‘pick-and-choose’ approach to ethics. Furthermore, an inclusive, global approach, with the participation of
United Nations funds, agencies and programmes, is required if we are to find ways of harnessing AI for
sustainable development.

UNESCO will be a full and active participant in this global conversation. Our organization has many years
of experience in the ethics of science and technology. Our advisory bodies have already produced
numerous reports and declarations, including on robotics, such as the Report of the World Commission on
the Ethics of Scientific Knowledge and Technology on Robotics Ethics in 2017. The advisory bodies also
have experience in developing normative instruments, including the Universal Declaration on the Human
Genome and Human Rights in 1997 and the Universal Declaration on Bioethics and Human Rights in
2005.

UNESCO priorities must also guide our international action in this area. It is essential to ensure that Africa
fully participates in transformations related to AI, not only as a beneficiary but also upstream, contributing
directly to its development. In terms of gender equality, we must fight against the biases in our societies to
guarantee that they are not reproduced in AI applications. Finally, we must empower young people by
providing them with the skills they need for life in the twenty-first century for integration in a changing
labour market.

UNESCO also has a key role to play in bridging existing divides, which AI is likely to deepen. Eliminating
fragmentation between countries and genders, but also in terms of resources and knowledge, could enable
more people to contribute to the digital transformation underway.

UNESCO, with its humanist mission and international dimension, involving researchers, philosophers,
programmers, policymakers, and private sector and civil society representatives, is the natural home for
debate on such ethical issues. Beginning later this year, UNESCO will organize debates on AI in several
regions of the world, bringing together specialists from a wide range of backgrounds and expertise. The
first debate, which took place in Marrakech, Morocco, on 12 December 2018, focused on AI and Africa. A
second international conference will take place at the UNESCO headquarters in Paris in the first half of
2019. This dialogue could eventually lead, with the agreement of Member States, to the definition of key
ethical principles to accompany developments in AI.

UNESCO, as a universal forum where everyone’s voice is heard and respected, is performing its role to the
fullest, informing the global debate on the major transformations of our time while establishing principles
to ensure that technological advances are used to serve the common good. The promise of AI and its
underlying ethical issues are fascinating, and our responses to these challenges will transform the world as
we know it.

Together, we must find the best solutions to ensure that the development of AI is an opportunity for
humanity, as it is our generation’s responsibility to pass down to the next a society that is more just, more
peaceful and more prosperous.

The Complex Relationship between AI, Robotics, and their Impact on Human Rights
and Social Development

The rapid advancement of artificial intelligence (AI) and robotics presents both opportunities and
challenges for human rights and social development. Their intertwined relationship creates a complex
landscape with far-reaching consequences.

AI and robotics can positively impact human rights and social development in various
ways:

Improved decision-making: AI algorithms can analyze vast amounts of data to inform policy
decisions, potentially leading to more efficient and effective resource allocation in areas like
healthcare, education, and disaster response.

Enhanced accessibility and inclusion: AI-powered technologies can assist individuals with disabilities,
provide educational resources to underserved communities, and promote social inclusion.
Increased efficiency and productivity: Robotics can automate repetitive and dangerous tasks, freeing
up human resources for creative endeavors and contributing to economic growth.

Advancements in healthcare: AI and robotics can assist in medical diagnosis, surgery, and
personalized medicine, improving patient care and outcomes.

Environmental benefits: AI and robotics can be utilized for environmental monitoring, sustainable
resource management, and development of clean energy technologies.

However, the same technologies also raise significant concerns regarding human rights
and social development:

Privacy violations: AI systems collect and analyze vast amounts of personal data, potentially leading
to privacy violations and discrimination.

Algorithmic bias: Biased algorithms can perpetuate existing inequalities and lead to unfair outcomes
in areas like employment, loan approvals, and criminal justice.
Job displacement: Automation through robots could lead to significant job losses, particularly in
manual labor sectors, potentially exacerbating social disparities.

Weaponization of AI: Autonomous weapons powered by AI raise ethical concerns regarding accountability and the potential for misuse in warfare.

The widening digital divide: Unequal access to AI and robotics technologies can exacerbate the existing digital divide, further disadvantaging marginalised communities.

Navigating this complex relationship requires careful consideration of the following:

Ethical frameworks: Establishing ethical guidelines and principles for the development and
deployment of AI and robotics is crucial to ensure their responsible use and mitigate potential harms.

Transparency and accountability: Developers and users of AI and robotics technologies need to be
transparent about their operations and algorithms, promoting accountability and public trust.

Human control and oversight: While AI and robotics have advanced capabilities, human control and
oversight are essential to ensure their safe and responsible operation.

Regulation and governance: Developing effective legal frameworks and regulations for AI and
robotics is necessary to address emerging challenges and protect human rights.

Investment in education and reskilling: Investing in education and retraining programs is crucial to
help individuals adapt to changing job markets and benefit from the opportunities presented by AI and
robotics.
Human Rights:

The issue of what the limits should be on artificial intelligence and emerging technologies is a
pressing one for society, governments, and the private sector. AI has the potential to improve strategic
foresight, democratize access to knowledge, and increase capacity for processing vast amounts of
information, but regulation is necessary to ensure that the benefits outweigh the risks. Two schools of
thought are shaping the current development of AI regulation: risk-based and human
rights-embedded.

In this committee, one of your main objectives is to identify human rights-based issues, solutions, and other considerations that can improve human rights and social development.

Delegates need to be aware of the problems AI can cause, chief among them its potential use as a tool of discrimination; the issue of discrimination and systemic racism has taken up increasing space in political debates about technological growth. Article 2 of the UDHR and Article 2 of the ICCPR both articulate individual entitlement to all rights and freedoms without discrimination. Yet in 2015, Google Photos, which is considered advanced recognition software, categorized a photo of two Black people as a picture of gorillas. When keywords such as ‘Black girls’ were entered into the Google search bar, the algorithm returned sexually explicit material. Researchers have also found that an algorithm that identifies which patients need additional medical care undervalued the medical needs of Black patients.
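
One way researchers surface the kind of disparity described above is a simple "demographic parity" audit: compare the rate of favourable outcomes a system produces across groups. The sketch below uses invented decisions (1 = flagged for additional care) purely for illustration:

```python
# Hypothetical audit: does the system flag patients for additional
# care at similar rates across two demographic groups? All data is
# invented for illustration.
def positive_rate(decisions):
    """Share of cases that received the favourable outcome."""
    return sum(decisions) / len(decisions)

group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% flagged for care
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # 25% flagged for care

disparity = positive_rate(group_a) - positive_rate(group_b)
print(f"disparity: {disparity:.2f}")  # large gaps warrant investigation
```

A gap this large does not by itself prove discrimination, but it is exactly the kind of measurable signal that lets policymakers and auditors question an otherwise opaque system.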

Technology can be a source of unemployment


The right to work and protection against unemployment is guaranteed under Article 23 of UDHR,
Article 6 of ICESCR, and Article 1(2) of the ILO. Though the rapid increase of AI has transformed
existing businesses and personal lives by improving the efficiency of machinery and services, such
change has also birthed an era of unemployment due to the displacement of human labour.

The COVID-19 pandemic has already affected millions of jobs, and a new wave of AI revolutions
may further aggravate the situation. As AI is increasingly introduced across job sectors, the poor risk
becoming poorer while the rich become richer. Indeed, AI represents a new form of
capitalism that strives for profit without the creation of new jobs; instead, a human workforce is
perceived as a barrier to growth. There is thus an urgent need to address the consequences of AI for
social and economic rights through the development of a techno-social governance system that can
protect the employment rights of humans in an AI era.

The United Nations can play a central role in convening key stakeholders and advising on progress.
The starting point should be the harms that people experience and will likely experience, and
regulation should be grounded in respect for human rights.

Another important area of concern is the vulnerability of AI systems in cybersecurity.

The issue and its significance


A RAND Perspectives report (Osoba and Welser, 2017) highlights various security issues related to
AI: fully automated decision-making leading to costly errors and fatalities; the use of AI
weapons without human mediation; AI vulnerabilities in cybersecurity; how the
application of AI to surveillance or cybersecurity for national security opens a new attack vector
based on ‘data diet vulnerability’; the use of network intervention methods by foreign-deployed AI;
and a larger-scale, more strategic version of today's advanced targeting of political messages on social
media. Cybersecurity vulnerabilities pose a particular threat because they are often hidden and
revealed only too late, after the damage has been caused.

Gaps and challenges


Effectively addressing such issues requires the proactive and responsive use of cybersecurity policies,
mechanisms and tools by developers and users at all stages: design, implementation and use. But
this is often not the case in practice, and it remains a real challenge. As a SHERPA report outlines, “When
designing systems that use machine learning models, engineers should carefully consider their choice
of a particular architecture, based on understanding of potential attacks and on clear, reasoned trade-
off decisions between model complexity, explainability, and robustness.”
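The brittleness this quote warns about can be illustrated with a toy evasion attack. The filter and rules below are hypothetical, invented only for demonstration: a system that matches exact tokens is defeated by a trivial character substitution, mirroring how adversarial inputs exploit overly rigid decision rules in deployed models.

```python
# Illustrative sketch of an "evasion attack" on a naive content filter.
# The blocklist and messages are toy examples invented for demonstration only.

BLOCKLIST = {"free", "winner", "prize"}

def naive_filter(message: str) -> bool:
    """Return True if the message is flagged by exact token matching."""
    words = message.lower().split()
    return any(word in BLOCKLIST for word in words)

# A plain malicious message is caught...
caught = naive_filter("You are a winner claim your prize")

# ...but a trivial character substitution evades the exact-match check,
# just as adversarial inputs exploit brittle decision boundaries in ML models.
evaded = naive_filter("You are a w1nner claim your pr1ze")

print(f"plain message flagged: {caught}, perturbed message flagged: {evaded}")
```

Real attacks against machine-learning systems are more subtle (perturbed images, poisoned training data), but the underlying lesson is the same: defences must be designed with the attacker's adaptations in mind, not just the clean inputs seen in testing.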

Many of these issues have wide-ranging societal and human rights implications. They affect a spectrum
of human rights principles: data protection, equality, freedoms, human autonomy and
self-determination of the individual, human dignity, human safety, informed consent, integrity, justice
and equity, non-discrimination, and privacy. As AI technologies work closely with
vast amounts of data, they will have cross-over and multiplicative effects that exacerbate the legal and
human rights issues related to them and their impacts on individuals (Rodrigues, 2019). Such issues will
only amplify if industry develops applications and systems without attending, early in the design
and development process, to the potential impacts of such technologies on human rights and on
ethical and societal values.

A proactive approach is needed to anticipate and mitigate potential risks associated with the
development of AI. This involves fostering international cooperation to share best practices, research
findings, and regulatory frameworks. Establishing ethical standards and norms at the global level can
serve as a foundation for responsible AI development, promoting inclusivity and respect for human
dignity.

As Model United Nations delegates, it is crucial to advocate for comprehensive and rights-centric AI
policies. Striving for a future where technological innovation aligns with the principles of human
rights is not only an ethical imperative but also a prerequisite for sustainable global development.

Social Development:

AI as a Tool for Social Development


Artificial Intelligence (AI) holds the promise of addressing some of the most complex global social
challenges, encompassing all 17 of the UN's sustainable development goals and potentially benefiting
millions in both developed and developing nations. This represents a crucial area for AI and
associated committees to actively engage in. Practical examples of AI applications are already making
an impact in approximately one-third of these cases, ranging from cancer diagnosis to assisting the
visually impaired, combating online sexual exploitation, and supporting disaster-relief efforts.
However, it's important to note that AI is just one element in a comprehensive toolkit for tackling
societal issues. Current limitations, such as challenges in data accessibility and a shortage of AI talent,
hinder its broader application for social good.

Crisis Management
In a crisis, technology plays a pivotal role by providing the situational awareness that informs
life-saving decisions, such as evacuating dangerous areas after an earthquake or strategically
placing essential resources such as medicine, food, clean water, and shelter. Leveraging data from
citizens in crisis zones, particularly through social media, enables rescuers to formulate immediate
and long-term rescue strategies.

Nevertheless, challenges arise due to the sheer volume of available data, necessitating high-quality
filtering systems to prevent the use of inaccurate information that could misguide humanitarian aid
efforts. Building trust among humanitarian responders is crucial for the adoption of AI technology in
the field, addressing concerns about the specificity of information. Machine learning, which refines
how AI learns from algorithms and data, offers a solution for extracting key information from social
media messages.
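The filtering idea described above can be sketched in miniature. Real systems use trained machine-learning classifiers; the hypothetical keyword scorer below (all terms and weights invented for illustration) only demonstrates the principle of ranking incoming crisis messages so responders see the most urgent first.

```python
# Illustrative sketch: triaging crisis-zone social media messages by urgency.
# The terms, weights, and messages are hypothetical, for demonstration only.

URGENT_TERMS = {"trapped": 3, "injured": 3, "collapsed": 2, "water": 1, "help": 1}

def urgency_score(message: str) -> int:
    """Sum the weights of urgent terms appearing in the message."""
    words = set(message.lower().split())
    return sum(weight for term, weight in URGENT_TERMS.items() if term in words)

messages = [
    "Family trapped under collapsed building please help",
    "Roads flooded but we are safe",
    "Need water and medicine injured people here",
]

# Rank messages so responders review the most urgent ones first.
ranked = sorted(messages, key=urgency_score, reverse=True)
print(ranked[0])  # the "trapped ... collapsed ... help" message ranks highest
```

A production pipeline would replace the keyword table with a model trained on labelled crisis messages and would include the quality-filtering step discussed above, since inaccurate or adversarial posts can misdirect humanitarian aid.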

Ethical considerations
The rapid growth of AI globally has presented numerous opportunities, from enhancing healthcare
diagnoses to fostering human connections through social media and streamlining tasks through
automation. However, this rapid evolution raises profound ethical concerns, including the potential for
AI systems to embed biases, contribute to climate degradation, and pose threats to human rights.
These risks compound existing inequalities, particularly affecting marginalized groups.

In the realm of artificial intelligence, ethical considerations are paramount, given that these
transformative technologies are reshaping the way we work, interact, and live. Without robust ethical
guidelines, AI risks perpetuating real-world biases, fostering discrimination, and undermining
fundamental human rights and freedoms, thus necessitating careful navigation and oversight.

Healthcare
Healthcare systems around the world face significant challenges in achieving the ‘quadruple aim’ for
healthcare: improve population health, improve the patient's experience of care, enhance the caregiver
experience and reduce the rising cost of care. Ageing populations, the growing burden of chronic diseases
and rising costs of healthcare globally are challenging governments, payers, regulators and providers
to innovate and transform models of healthcare delivery. Moreover, against a backdrop now catalysed
by the global pandemic, healthcare systems find themselves challenged to ‘perform’ (deliver effective,
high-quality care) and to ‘transform’ care at scale by channelling real-world, data-driven insights directly
into patient care.

The pandemic has also highlighted shortages in the healthcare workforce and inequities in access
to care. The application of technology and artificial intelligence (AI) in healthcare has the potential to
address some of these supply-and-demand challenges. The increasing availability of multi-modal data
(genomics, economic, demographic, clinical and phenotypic) coupled with technology innovations in
mobile, internet of things, computing power and data security herald a moment of convergence
between healthcare and technology to fundamentally transform models of healthcare delivery through
AI-augmented healthcare systems.
In particular, cloud computing is enabling the transition of effective and safe AI systems into
mainstream healthcare delivery. Cloud computing is providing the computing capacity for the analysis
of considerably large amounts of data, at higher speeds and lower costs compared with historic ‘on
premises’ infrastructure of healthcare organisations.

Conclusion

High-Level Advisory Body on Artificial Intelligence


The Global AI Imperative
Globally coordinated AI governance is the only way to harness AI for humanity while addressing its
risks and uncertainties as AI-related services, algorithms, computing capacity and expertise become
more widespread internationally.

The UN's Response


To foster a globally inclusive approach, the UN Secretary-General is convening a multi-stakeholder
High-level Advisory Body on AI to undertake analysis and advance recommendations for the
international governance of AI.

Calling for Interdisciplinary Expertise


Bringing together up to 32 experts in relevant disciplines from around the world, the Body will offer
diverse perspectives and options on how AI can be governed for the common good, aligning
internationally interoperable governance with human rights and the Sustainable Development Goals.

A Multistakeholder, Networked Approach


The Body, which will comprise experts from government, private sector and civil society, will engage
and consult widely with existing and emerging initiatives and international organizations to bridge
perspectives across stakeholder groups and networks.

Supporting the Body


The UN is calling for support to the Body’s operations and the secretariat based in the Office of the
Secretary-General’s Envoy on Technology (OSET). Through their support, contributors will
strengthen stakeholder cooperation on governing AI in the face of pressing technical breakthroughs,
and thereby contribute to better-governed AI globally.
In conclusion, the advent of artificial intelligence (AI) necessitates a profound consideration of
bioethics principles within the framework of our global society. Emphasizing beneficence, the
upholding of values, transparency, and accountability, we acknowledge the transcendental nature of AI
bioethics, aiming to bridge its inherent inability to empathize. It is essential to heed the cautionary words of
Joseph Weizenbaum, who asserted that pivotal decisions should not be delegated to computers because of AI's
inherent lack of human qualities such as compassion and wisdom. In the realm of AI, bioethics is not
a mere calculation but a conscientization process, recognizing that, despite advancements, AI remains
a machine and a tool devoid of authentic human emotions. As articulated in the White Paper on AI –
A European Approach to Excellence and Trust, AI must be developed cautiously, ensuring its alignment
with human rights and subjecting high-risk AI to rigorous testing and certification before it enters the
global market. This underscores the imperative that AI, while a transformative force, must always
serve humanity, upholding our fundamental rights and values.

QARMA (Questions a Resolution Must Answer)

Human rights

How will AI and robotics impact individual privacy rights and human autonomy, particularly in areas
like data collection, surveillance, and decision-making?

What safeguards are needed to prevent AI and robotics from perpetuating or amplifying existing
forms of discrimination based on race, gender, ability, or other factors?

How will legal frameworks adapt to ensure accountability for AI and robotic systems in cases of harm
or bias?

How will automation and the adoption of AI and robotics impact employment opportunities and
income inequality, and what measures can be taken to support workers affected by these changes?

How might AI and robotics be used to monitor and restrict freedom of expression and assembly, and
what mechanisms can be implemented to protect these fundamental rights?

Social Development:

How can education systems be adapted to prepare individuals for jobs and opportunities in the age of
AI and robotics?

How can we ensure equitable access to AI and robotic technologies across different socioeconomic
groups and geographical regions?

How will AI and robotics affect existing social safety nets and support systems, particularly for
vulnerable populations?

How will AI and robotics shape urban development and infrastructure needs, and how can these
technologies be used to create more sustainable and resilient cities?

What ethical frameworks and guidelines should be established to ensure the responsible development
and deployment of AI and robotics for the benefit of society?

Additional Questions:

What role can international cooperation and collaboration play in addressing the global challenges and
opportunities presented by AI and robotics?

How can we encourage public participation and debate on the ethical and social implications of AI
and robotics?

What research and development initiatives are needed to mitigate the potential negative impacts and
maximize the positive potential of AI and robotics?

How can we ensure that AI and robotics are used to promote inclusivity, diversity, and human
well-being?

What are the ethical implications and dangers of developing and deploying autonomous weapons
systems with lethal capabilities?

How can we ensure human control and accountability in the use of military AI and robotics?

How can we mitigate the risks of AI-powered cyberattacks and ensure responsible use of these
technologies in the context of international conflict?

How can we prevent an AI arms race and manage the potential for increased militarization and
escalation of conflict fueled by advanced technologies?

How can we ensure transparency and verification of AI and robotic capabilities in the context of arms
control agreements and international law?

How can AI and robotics be used to support post-conflict reconstruction, peacebuilding efforts, and
humanitarian assistance?
Additional Resources:

Human rights:

United Nations Human Rights Office:


https://www.ohchr.org/en/statements/2023/07/artificial-intelligence-must-be-grounded-human-rights-says-high-commissioner

World Economic Forum:


https://centres.weforum.org/centre-for-the-fourth-industrial-revolution/home

Future of Life Institute: https://futureoflife.org/

The Ethics of Artificial Intelligence: https://plato.stanford.edu/entries/ethics-ai/

Additional References:

1. Kaplan A, Haenlein M. Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations,
illustrations, and implications of artificial intelligence. Business Horizons. 2019;62:15–25.

2. Russell SJ, Norvig P. Artificial Intelligence: A Modern Approach. Upper Saddle River, New Jersey:
Prentice Hall; 2009.

3. Schank RC. Where's the AI? AI Magazine. 1991;12:38.

4. Kaplan J. Artificial Intelligence: What Everyone Needs to Know. New York: Oxford University Press;
2016.

5. Nilsson NJ. Principles of Artificial Intelligence. Palo Alto, California: Morgan Kaufmann Publishers;
1980.

6. Nilsson N. Artificial Intelligence: A New Synthesis. Morgan Kaufmann; 1998.

7. Dina B. “Microsoft develops AI to help cancer doctors find the right treatment.” Bloomberg
News. 2016.

8. Meera S. “Are autonomous robots your next surgeons?” CNN, Cable News Network. 2016.

9. Jacob R. Thinking machines: The search for artificial intelligence. Distillations. 2016;2:14–23.

10. Weizenbaum J. Computer Power and Human Reason: From Judgment to Calculation. San Francisco:
W. H. Freeman; 1976.

11. Cellan-Jones R. Stephen Hawking warns artificial intelligence could end mankind. BBC News;
Wikipedia, the Free Encyclopedia, on Artificial Intelligence. 2014. [Last accessed 2019 Jun 23]. Available
from: https://en.wikipedia.org/wiki/Artificial_Intelligence

12. Scoping study on the emerging use of Artificial Intelligence (AI) and robotics in social care.
Skills for Care. [Last accessed 2019 Aug 15]. Available from: www.skillsforcare.org.uk

13. Kindig B (technology analyst). 5 soon-to-be trends in artificial intelligence and deep learning.
Forbes. 2020. [Last accessed 2020 Mar 30]. Available from:
https://www.forbes.com/sites/bethkindig/2020/01/31/5-soon-to-be-trends-in-artificial-intelligence-and-deep-learning/

14. Gibney E. The battle for ethical AI at the world's biggest machine-learning conference. Nature
News. 24 January 2020. [Last accessed 2020 Apr 11]. Available from:
https://www.nature.com/articles/d41586-020-00160-y

15. Prof Stephen Hawking, one of Britain's pre-eminent scientists, said that efforts to create
thinking machines pose a threat to our very existence. Interview on BBC, 2 December 2014. Noted by
Rory Cellan-Jones.

16. Bostrom N. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University
Press; 2014.

17. Delcker J, Politico Europe's artificial intelligence correspondent, on the black box of artificial
intelligence. DW News. 2018. [Last accessed 2019 Nov 18]. Available from:
https://m.dw.com/en/can-ai-be-free-of-bias/a-43910804

18. European Commission. Ethics Guidelines for Trustworthy AI. The High-Level Expert Group
on AI presented this guideline, which states three requirements: lawful, ethical and robust.

19. Bostrom N, Yudkowsky E. The Ethics of Artificial Intelligence. In: Frankish K, Ramsey W,
editors. Cambridge Handbook of Artificial Intelligence. New York: Cambridge University Press;
2014.

20. Strout N. The Intelligence Community is developing its own AI ethics. C4ISRNET Artificial
Intelligence Newsletter. 2020. [Last accessed 2020 May 21]. Available from:
https://www.c4isrnet.com/artificial-intelligence/2020/03/06/the-intelligence-community-is-developing-its-own-ai-ethics/

21. Von der Leyen U, President of the European Commission, unveiled the EU's plans to regulate AI
on 19 February 2020. [Last accessed 2020 May 08]. Available from:
www.dw.com/en/european-union-unveils-plan-to-regulate-ai/a-52429426
