

RIDP
Military justice is an essential aspect of a nation’s defence system, rooted in a rich historical
08

Michelle Nel & Sonja Els (Eds.)


Gwenaël Guyon, Evert Kleynhans,
context that is intertwined with wider legal, political and social developments. Its
importance has increased in the 21st century due to the changing nature of war, the need
to protect civilians, the need to deal with misconduct by soldiers and to protect victims. It
is essential to understand the historical development of military justice in order to grasp its
complex legal and ethical dimensions, and avoid past mistakes.

Military justice serves to address misconduct within the armed forces, and to ensure

libri
discipline and compliance with ethical and international standards. Ongoing training in
military law is mandatory to prevent illegal actions and foster a culture of respect for legal
standards. One of the main objectives of military justice is also to protect civilians during
armed conflicts. International humanitarian law, including the Geneva Conventions, requires
the protection of non-combatants, and military justice systems help to ensure compliance
with these laws. Investigating and prosecuting violations, particularly those that endanger
civilians, helps to ensure accountability and maintain the legitimacy of military operations.
In the same way, advances in military technology, such as the use of drones and artificial
intelligence, pose new challenges for military justice. Legal frameworks must evolve to take
Gwenaël Guyon, Evert Kleynhans,

Military Justice: Contemporary, Historical and Comparative Perspectives


account of legal and ethical implications of these technologies. Additionally, warfare has
significantly transformed in recent years, with cyber warfare, private military companies
and counter-insurgency operations. Finally, contemporary military operations often involve
Michelle Nel & Sonja Els (Eds.)
coalitions of multiple countries, requiring harmonized approaches to military justice to
ensure consistency across different legal systems. Military Justice: Contemporary, Historical
The International Military Justice Forum (IMJF) provides a platform for global discussions on
and Comparative Perspectives
military justice, bringing together academics, practitioners, and military personnel. It fosters
comparative analysis of international military justice systems and explores their historical
and current evolution.
This volume brings together major contributions to the 2nd International Military Justice
Forum, which convened on 8 and 9 November 2023 in Stellenbosch, South Africa.

Gwenaël Guyon is Associate Professor in Legal History and Comparative Law at Saint-
Cyr Coëtquidan Military Academy, seconded from the University Paris Cité, and President
of the International Military Justice Forum.

Evert Kleynhans is Associate Professor in Military History at the Faculty of Military


Science of Stellenbosch University, and Honorary Researcher at the Centre for War and
Diplomacy at Lancaster University.

Michelle Nel is Associate Professor in Military Law at the Faculty of Military Science of
Stellenbosch University, and a part-time researcher at the Security Institute for Governance
and Leadership in Africa (SIGLA).

Sonja Els is Senior Lecturer in Mercantile Law and Military Law, and Chair of the School
for Human Resource Development at the Faculty of Military Science of Stellenbosch
University. Revue Internationale de Droit Pénal
RIDP
International Review of Penal Law
Revista internacional de Derecho Penal
[Link]
ISBN 978-90-466-1277-4 Международное обозрение уголовного права
刑事法律国际评论
libri 08

‫المجلة الدولية للقانون الجنائي‬


Revista Internacional de Direito Penal
Rivista internazionale di diritto penale
Internationale Revue für Strafrecht
9 789046 612774
MAKLU
MAKLU
THE LEGAL AND MORAL LANDSCAPE OF MILITARY ARTIFICIAL
INTELLIGENCE: FIXING THE ACCOUNTABILITY GAP BETWEEN
MAN AND MACHINE
Piet Bester* & Sonja Els*

Abstract

Artificial Intelligence (AI) increasingly creates an ethical and moral dilemma in traditional warfare. The irony is that new technology is introduced to make the battlefield more predictable, but it has in fact added new sources of unpredictability, caused by the risk of not fully understanding the autonomous process of selection and engagement and of straining the link between intent and outcome. This paper aims to determine the nature of the accountability gap between man and machine powered by military AI, to suggest how this gap can be closed within the legal and moral landscape of AI in military operations, and to share the findings with the academic community. Areas that will contribute to the development of a comprehensive framework to close the accountability gap are proposed. These areas include, but are not limited to, the role of supplying states and manufacturers, layered target identification, certification of autonomous weapons systems (AWS), preprogramming of AWS with moral codes, and integrating Responsible Artificial Intelligence (RAI) into military leadership development and training. The paper concludes with some recommendations.

1 Introduction

Modern warfare has been significantly shaped by advancements in technology, notably machine learning, robotics, and artificial intelligence (AI).1 Autonomous weapons systems (AWS) such as AI-enabled drones can independently choose, follow and destroy targets after launch. The development of these systems into fully fledged lethal autonomous weapon systems (LAWS) raises significant ethical and legal concerns.2 This is

* Faculty of Military Science, Department of Industrial Psychology (Mil), University of Stellenbosch.


Email: pcbester@[Link].
* Faculty of Military Science, Department of Mercantile & Public Law (Mil), University of Stellenbosch.

1 Peter Layton, ‘The Artificial Intelligence Battlespace’ (RUSI, 9 March 2021) <[Link]> accessed 23 December 2022; Vojtech Šimák, Michal Gregor, Marián Hruboš, Dušan Nemec and Jozef Hrbček, ‘Why Lethal Autonomous Weapon Systems are Unacceptable’ (IEEE 15th International Symposium on Applied Machine Intelligence and Informatics, Herl’any, 26-28 July 2017); Pieter Elands, Marlijn Heijnen and Peter Werkhoven, ‘Operationalization of Meaningful Human Control for Military AI: A Way Forward’, Position Paper for the REAIM 2023 Summit, February 2023 <[Link]> accessed 4 November 2023.
2 Ludovic Righetti, Quang Cuong Pham, Raj Madhavan and Raja Chatila, ‘Lethal Autonomous Weapon Systems’ [2018] 25(1) IEEE Robotics & Automation Magazine 123-126; Mann Virdee, ‘AI and national security: What are the challenges?’ Britain’s World: The Council on Geostrategy’s online magazine [2023] <[Link]> accessed 12 November 2023.
because AI is ‘blind’: it identifies a pattern and acts through dehumanisation and objectification, which can lead to automation bias.3 This independence is known as autonomy.4 Autonomy enables faster decision-making than in the past but complicates the question of who is responsible and accountable for its actions.

This issue of accountability in the use of military AI systems has sparked international debate, particularly concerning compliance with international humanitarian law (IHL) and the legal and ethical considerations surrounding accountability.5 The consensus is that AI should not possess authority over human life and death,6 creating a complex challenge in attributing responsibility for AI-driven decisions.

The push for international discourse began with the 2010 Berlin Statement by the International Committee for Robot Arms Control (ICRAC), highlighting concerns over loss of human control in warfare.7 This was followed by other events such as the ‘United Nations Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects’.8 While some authors9 debate whether AWS or semi-autonomous weapon systems should be used at all, this paper assumes that they will be used in one AI form or another. The aim is to focus on the concern about accountability rather than the legality of the weapon system itself.

3 Channel 4 News, ‘How the Russia-Ukraine War is supercharging AI warfare’ (July 2023) <[Link]> accessed 4 November 2023. ‘Automation bias refers to the tendency to rely too heavily on automated systems without critically evaluating their outputs or recommendations’, as stated by Ravi Panwar, ‘A Qualitative Risk Evaluation Model for AI-Enabled Military Systems’ in Jan Maarten Schraagen (ed), Responsible Use of AI in Military Systems (CRC Press, 2024) 62.
4 Channel 4 News, ‘How the Russia-Ukraine War is supercharging AI warfare’ (July 2023) <[Link]> accessed 4 November 2023.


5 There is a debate about responsibility, trust and reliability, control and accountability, and motivation and dignity when AWS are used. See also: Peter-Dean Baker, ‘Ethical Issues of Using Killer Robots and Drones in Warfare’ Web Seminar (2022); Filippo Santoni de Sio and Jeroen van den Hoven, ‘Meaningful Human Control Over Autonomous Systems: A Philosophical Account’ [2018] 5 Frontiers in Robotics and AI 5; Robert Sparrow, ‘Killer Robots’ [2007] 24(1) Journal of Applied Philosophy 62-77; Michael Skerker, Duncan Purves and Ryan Jenkins, ‘Autonomous Weapon Systems and the Moral Equality of Combatants’ [2020] 3(6) Ethics and Information Technology 197; Ilse Verdiesen, ‘Comprehensive Human Oversight over Autonomous Weapon Systems’ (D Phil Thesis, Delft University of Technology 2024) 19.
6 Marc Canellas and Rachel Haga, ‘Lost in Translation: Building a Common Language for Regulating Autonomous Weapons’ [2016] 35(3) IEEE Technology and Society Magazine 53; ICRC UK and Ireland, ‘What you need to know about artificial intelligence in armed conflict’ (ICRC, 6 October 2023) <[Link]> accessed 3 November 2023.
7 Canellas and Haga (n 6) 51.

8 Fabio Massacci and Silvia Vidor, ‘Building Principles for Lawful Cyber Lethal Autonomous Weapons’ [2022] IEEE Security and Privacy 101.


9 De Sio and van den Hoven (n 5) 1-14. See also Skerker, Purves and Jenkins (n 5) 197-209.

Key issues identified in this debate include defining autonomy, determining the necessary level of control for lawful AWS use, establishing accountability frameworks, and certifying permissible AWS.10 Additionally, maintaining human oversight in AI deployment presents challenges, such as addressing the responsibility gap, aligning AI behavior with stakeholder values, and ensuring human performance is consistent with ethical standards.11 Canellas and Haga12 assert that there are many benefits to international regulation of AWS, such as using similar standards and ensuring adherence to international law.

Despite extensive discussions on responsibility, there is no international consensus on defining meaningful human control or on addressing the accountability gap in AI development and deployment. This paper explores the nature of the accountability gap in military AI and proposes solutions to bridge this gap within the legal and moral frameworks governing AI in military operations.

This paper will begin by clarifying the core concepts, then discuss the implications of military AI in autonomous warfare, followed by an exploration of the role of humans in relation to military AI systems. It will also review the existing moral and legal frameworks guiding the ethical use of AI-enabled weapons. The final section will offer recommendations to adjust these frameworks to address the accountability gap. This research seeks to contribute to the academic and military discourse by focusing on the under-researched area of accountability in military AI.

2 Concept clarification

This discussion is situated within the context of military ethics and IHL. Military ethics refers to a systematic process of weighing and explicating morality, made up of the human values and norms regarding discipline, integrity and honesty that guide the military community in making the right choices in their work context.

2.1 Authority and responsibility

Authority describes which functions an agent (human or weapon) is asked to perform.13 In contrast, responsibility describes which outcomes an agent will be accountable for in an organizational, regulatory or legal sense. The responsibility for the outcome of a function must be considered relative to the authority to perform it.14 An illustrative example of these two concepts is the modern commercial airline cockpit, where the human flight crew maintains responsibility for the safety of the flight although the autopilot and auto-flight systems exercise authority over important actions within the aircraft’s control

10 Canellas (n 7) 52.
11 Pieter Elands, Marlijn Heijnen and Peter Werkhoven, ‘Operationalization of Meaningful Human Control for Military AI’ [2023] Position Paper for the REAIM 2023 Summit 1-9.
12 cf Canellas et al. (n 7) 52.

13 ibid 55-56.

14 ibid 56.

and trajectory management functions.15 The same principle can thus be applied to AWS
in general and, more specifically, LAWS.

2.2 Moral responsibility and control

Verweij,16 a respected scholar in military ethics, states that a person demonstrates moral responsibility if they recognize the ‘other’ and the appeal the other makes to them. Moral responsibility is obedience towards a bureaucratic authority in line with local circumstances.17 More specifically, within the context of AWS, moral responsibility is based on whether and under which conditions humans are in control and, therefore, responsible for their everyday actions.18

Control can be understood by referring to the work of the philosopher Dennett, who states that A controls B only if the relation between A and B is such that A can drive B into whichever of B’s normal range of states A wants (intends) B to be in.19 Two broad views on control and moral responsibility can be postulated philosophically.20 Firstly, incompatibilists hold that causal explanations of human action and human moral responsibility cannot be reconciled. Secondly, compatibilists believe that humans may be morally responsible for some of their actions even if they do not possess any special metaphysical power to escape the causal influences on their behavior. Control and accountability are closely tied; if one has control, one has accountability.21
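Dennett’s definition lends itself to a compact formal rendering. The following is our sketch, not notation from the paper; S_B stands for B’s normal range of states:

```latex
\text{Controls}(A,B) \iff \forall s \in S_B :
  \big(A \text{ intends } B \text{ to be in } s\big) \Rightarrow \big(A \text{ can drive } B \text{ into } s\big)
```

On this reading, control is a capacity claim, which is why it ties so directly to accountability: A answers for B’s state because A could have driven B into a different one.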

2.3 Accountability

Where responsibility is task-oriented, accountability refers to what happens after a situation has occurred, to determine ownership of the results. In a recent study, Verdiesen defines accountability as:

…a form of backward-looking responsibility that refers to the ability and willingness of actors to provide information and explanations about their actions and defines mechanisms for corporate and public governance to hold agents and organisations [sic] accountable in a forum…22

15 ibid.
16 Désirée Verweij, ‘Introduction: Ethics and Military Practice’ in Désirée Verweij, Peter Olsthoorn and Eva van Baarle (eds), Ethics and Military Practice (Brill Nijhoff, 2022) 1-14.
17 Eric-Hans Kramer, Herman Kuipers and Miriam de Graaff, ‘An Organisational Perspective on Military Ethics’ in Verweij, Olsthoorn and van Baarle (n 16) 83-99.


18 De Sio and van den Hoven (n 5) 1-14.

19 Peter-Dean Baker, ‘Ethical Issues of Using Killer Robots and Drones in Warfare’ Online Seminar (31

August 2022).
20 De Sio and van den Hoven (n 5) 4.

21 Baker (n 19).

22 Ilse Verdiesen, ‘Comprehensive Human Oversight over Autonomous Weapon Systems’ (D Phil Thesis,

Delft University of Technology 2024) 19.

Sometimes one does not have control but can still be accountable, because one should have control.23 The possibility of human intervention (read also: control) is crucial for accountability when it comes to the actions of an autonomous system.24 If a human can intervene in and/or influence the autonomous system, it would thus not be viewed as truly autonomous. However, when a system has no human overrides, it creates what Kajander et al.25 call the accountability gap, which is the focus of this paper. There is, thus, a direct relationship between decision-making and accountability.

2.4 Moral decision-making

Any life-or-death decision that a soldier may be confronted with is a moral one. Any moral decision is guided by what one should do based on norms and subsequently has some form of moral consequence.26 Hence, military AI is inescapably implicated in moral or ethical decision-making in military practice.27

Baker states that, based on the arguments by De Sio and van den Hoven,28 there are mainly three kinds of objections to granting full autonomy to a machine: (1) robots (AI systems) are not, and will likely not be, able to make the practical and moral distinctions required by the laws of armed conflict, for example to distinguish between combatants and non-combatants and to apply proportionality in the use of force; for this reason, delegating military tasks to robots may contaminate military operations; (2) there is something fundamentally wrong with leaving decisions over the life and death of people to a machine, and it is fundamentally mala in se (evil in itself); (3) in the case of war crimes or fatal use of weapons, it might be impossible to hold anyone militarily and morally responsible,29 and ultimately accountable.

The above thus leads to the question of who the responsible parties would be. One can identify several role players in terms of responsibility, namely the manufacturer, the programmers, the seller of the AWS, the operator (in limited cases), the user when, for example, negligence leads to a failure, or even the machine itself.30 Similarly, all weapon systems are designed and activated by human beings who can be held accountable if

23 Baker (n 19).
24 Aleksi Kajander, Agnes Kasper and Evhen Tsybulenko, ‘Making the Cyber Mercenary: Autonomous Weapon Systems and Common Article 1 of the Geneva Conventions’ in Tat’ána Jančárková, Lauri Lindström, Massimiliano Signoretti and Gábor Visky (eds), 20/20 Vision: The Next Decade, 12th International Conference on Cyber Conflict (Tallinn, 2020) 85.
25 ibid.

26 Christine Boshuijzen-Van Burken, ‘Ethics and Technology’ in Verweij, Olsthoorn and van Baarle (n 16)

71-72.
27 ibid 70.

28 De Sio and van den Hoven (n 18) 12.

29 Baker (n 19).

30 Kajander (n 24) 84-85.

their ‘creations’ do not comply with the law of armed conflict (LOAC).31 Current principles of International Criminal Law (ICL) provide for various classes of participation in crimes (such as co-perpetration and aiding and abetting). Acquaviva32 concludes that it will be up to the judges to decide, on the facts of each case, how the individual participated in an international crime and what degree of culpability should be assigned to the conduct. Criminal blame can theoretically be ascribed to any of the abovementioned individuals. Furthermore, in such a combined responsibility concept, the degree of responsibility will inevitably impact the sentencing phase, where an assessment will be made to determine how much punishment the manufacturer, programmer, seller or operator ‘deserves’ for their contribution.33

2.5 Autonomous warfare

Autonomous weapons systems have received the most attention when it comes to the use of military AI.34 First and foremost, it is vital to distinguish autonomous systems from automated systems. An automated system operates by clear, repeatable rules based on unambiguous sensed data. In contrast, an autonomous system takes in data about the unstructured world around it, processes the data to generate information, generates alternatives and makes decisions in the face of uncertainty.35 Autonomous systems range from fully automated to fully autonomous. Semi-autonomous weapons have been used in warfare for many years.36 These systems are generally limited in their mobility: some are static systems which are either unmoving or move only in preprogrammed areas, while others are embedded in transport systems piloted by humans (such as an autopilot on an aeroplane) and can hardly operate without a human operator. At the opposite end, there are weapons systems where man is no longer needed, such as uncrewed ground vehicles (UGVs) and uncrewed aerial vehicles (UAVs).37 From an overview of the literature,38 AWS, whether lethal or not, can be defined as weapon systems able to perform target selection, including search, detection, identification, tracking and action, up to and including target attack (the use of force to neutralize, damage or destroy). Thus,

31 Guido Acquaviva, ‘Autonomous Weapon Systems Controlled by Artificial Intelligence: A Conceptual Roadmap for International Criminal Responsibility’ [2022] 60(1) The Military Law and the Law of War Review 98.
32 ibid 104.

33 ibid 107.

34 ICRC UK & Ireland, ‘What You Need to Know About Artificial Intelligence in Armed Conflict’ (International Committee of the Red Cross, 2023) <[Link]> accessed 12 September 2023.
35 Mary L Cummings, ‘Lethal Autonomous Weapons: Meaningful Human Control or Meaningful Certification?’ [2019] IEEE Technology and Society Magazine 20.


36 Kajander (n 24) 84.

37 Peter Layton, ‘The Artificial Intelligence Battlespace’ (RUSI, 9 March 2021) <[Link]> accessed 23 December 2022.


38 Altman et al. cited in De Sio and van den Hoven (n 18) 12. Cf Canellas et al. (n 7) 50; Daniele Amoroso, ‘Jus In Bello and Jus Ad Bellum Arguments Against Autonomy in Weapon Systems: A Re-Appraisal’ [2017] 43 QIL, Zoom-in 5-31, 9; cf ICRC UK & Ireland (n 34).

when a system is fully autonomous, all decisions are delegated to the system itself. The
link to military AI is therefore clear.

There are some concerns and criticisms against AWS, although some argue that using decision support systems can help human decision-making in a way that facilitates compliance with IHL and minimizes the risk for civilians.39 With AWS, it is easier to dehumanize the enemy, viewing them as targets or collateral damage stripped of human characteristics such as love, humor, having a family and feelings, thereby giving up the notion of a human enemy. The concern about autonomous weapons relates not only to the humans involved but also to the ‘machine’ or system itself, because machines lack specific human characteristics.40 Machines do not possess virtues such as sympathy or mercy, lack any inhibitions against violence and self-sacrifice, make decisions without emotion, and have no sense of mortality or conscience.41 Thus, it is difficult to grasp how humans can act responsibly if they allow machines to have a bearing on ending human life.

Furthermore, the unpredictability of new technology is a valid concern. The existing framework of laws was not created with autonomous systems in mind, and its application to autonomous systems, especially in the military domain, leads to legal uncertainties in the use of military AI.42 IHL places responsibility and accountability on humans, but increasing autonomy complicates the identification of the legally responsible person.43

2.6 Artificial intelligence

AI involves using computer systems to execute tasks that require human cognition, planning or reasoning, such as directly triggering a strike against a person or a vehicle.44 Algorithms, sets of instructions or rules that a computer or machine must use to respond to a question or solve a problem, are the foundation of an AI system.

Furthermore, machine learning is an AI approach in which the system creates its own instructions based on the data it is ‘trained’ on. These instructions are then used to generate a solution to a particular task. In essence, the software writes and improves itself based on the inputs from the environment in which it operates. Machine learning goes beyond so-called ‘rule-based algorithms’, as it does not always respond in the same way to the same input and is thus unpredictable. The ICRC UK and Ireland45 highlight the challenge of machine

39 cf ICRC UK & Ireland (n 34).


40 Vojtech Šimák, Michal Gregor, Marián Hruboš, Dušan Nemec and Jozef Hrbček, ‘Why Lethal Autonomous Weapon Systems are Unacceptable’ (IEEE 15th International Symposium on Applied Machine Intelligence and Informatics, Herl’any, 26-28 July 2017); Philip Chmielewski, ‘Ethical Autonomous Weapons? Practical Required Functions’ [2018] IEEE Technology and Society Magazine 52.
41 Philip Chmielewski, ‘Ethical Autonomous Weapons? Practical Required Functions’ [2018] IEEE Technology and Society Magazine 53.


42 Kajander (n 24) 84-85.

43 Massacci and Vidor (n 8) 102.

44 ICRC UK & Ireland (n 34).

45 ibid.

learning systems because, even if the inputs are known, it can be challenging to explain retrospectively why the system produced a particular output. This so-called ‘black box’ system thus makes it difficult to pinpoint responsibility and accountability.
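To make the contrast concrete, the following minimal Python sketch (our illustration, not from the paper; the ‘radar signature’ feature and threshold are purely hypothetical) compares a transparent rule-based check with a learned classifier whose decision criterion lives in fitted weights rather than in an inspectable rule:

```python
# Rule-based: the decision criterion is explicit and auditable.
def rule_based_classifier(radar_signature: float) -> str:
    return "military" if radar_signature > 0.8 else "civilian"

# Learned: the criterion is encoded in fitted weights. Even with the inputs
# known, there is no human-readable rule explaining why a particular input
# produced a particular output -- the 'black box' problem described above.
from sklearn.linear_model import LogisticRegression

X_train = [[0.1], [0.3], [0.7], [0.9]]      # hypothetical training signatures
y_train = ["civilian", "civilian", "military", "military"]

model = LogisticRegression().fit(X_train, y_train)
print(model.predict([[0.75]]))              # a label, but no rationale
```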

Layton46 quotes Robert Work, who declares: ‘The reason to pursue AI is to pursue autonomy. AI weapon systems might apply lessons learned in the battlespace to develop their criteria against which to recognize a target, or they may observe and record the pattern of life in the target area and subsequently apply their observations and pre-learned lessons to decide what to attack.’ Therefore, some militaries are using AI to improve the speed and accuracy of their battlefield decision-making, through automating command and control systems and predictive operational planning tools that address intelligence, surveillance, and reconnaissance. It is expected that the first country to understand AI well enough to change human-centered force structures and embrace AI warfighting may gain a considerable advantage.

Various authors have identified certain shortcomings of AI. The most noteworthy are: humans can produce better results under certain circumstances; machines find it hard to handle minor context changes; AI systems can be easily fooled; AI is brittle outside the contexts it was trained for, and hence struggles to apply knowledge across different contexts; it has a limited capacity to generalize from little information; and humans make better judgments in environments of uncertainty.47 Furthermore, existing computer vision systems relying on machine learning are no better than humans at correctly distinguishing between criteria in high-uncertainty target identification scenarios.48

Despite the abovementioned challenges, the advantages of AI and AWS are substantial. Mirza et al.49 summarise them in terms of use. Economically, AWS are more cost-effective than human-crewed weapon systems: compare the operational cost of a Predator UAV, approximately $100 per hour in 2016, to that of a standard human-crewed tactical aircraft, at a minimum of $1,500 per hour in 2016. Tactically, AWS do not risk a human pilot’s life and exclude the risk of a pilot becoming a prisoner of war. Technically, the radar signature of UAVs is much smaller than that of human-crewed aircraft and satellites, enabling them to fly closer to the earth’s surface and remain over the area for longer. Militarily, AWS have transformed the warfare landscape mainly due to their ability to strike, suppress and destroy, and may as a result enhance mission effectiveness;

46 Layton (n 37).
47 ibid 13-15.
48 Cummings (n 35) 25.

49 Muhammad Nadeem Mirza, Irfan Hasnain Qaisrani, Lubna Abid Ali and Ahmad Ali Naqvi, ‘Unmanned Aerial Vehicles: A Revolution in the Making’ [2016] 31(2) Research Journal of South Asian Studies 249-250.

and lastly, in the civilian arena, vast possibilities exist, including coastguard search and
rescue, border surveillance, and firefighting.
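On the 2016 figures quoted above, the per-hour operating cost differential alone is fifteenfold:

```latex
\frac{\$1500 \text{ per hour (crewed tactical aircraft)}}{\$100 \text{ per hour (Predator UAV)}} = 15
```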

When considering the notion that AI seemingly accelerates decision-making, the shortcomings of AI referred to in the preceding paragraph resonate with the research problem this paper addresses, namely who will be held accountable for wrong decision-making that leads to the loss of innocent lives. This links closely to the role of man in AWS as identified at the Convention on Certain Conventional Weapons.50

3 The Role of Man in AWS

AI-enabled systems have replaced many traditional human tasks, offering increased effectiveness, efficiency and speed. Layton51 describes three modes of autonomy in AWS: man-in-the-loop, man-on-the-loop, and man-out-of-the-loop.
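Before examining each mode in turn, the human-authorization gate each mode implies can be sketched in a few lines of Python (our illustration, not the authors’; the function and enum names are hypothetical):

```python
from enum import Enum

class AutonomyMode(Enum):
    MAN_IN_THE_LOOP = "human must authorize each engagement"
    MAN_ON_THE_LOOP = "system acts; supervising human may veto"
    MAN_OUT_OF_THE_LOOP = "system acts with no human intervention"

def engagement_proceeds(mode: AutonomyMode, human_authorized: bool,
                        human_vetoed: bool) -> bool:
    if mode is AutonomyMode.MAN_IN_THE_LOOP:
        return human_authorized      # nothing happens without authorization
    if mode is AutonomyMode.MAN_ON_THE_LOOP:
        return not human_vetoed      # proceeds unless a human intervenes
    return True                      # no human gate: the accountability gap
```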

Man-in-the-loop systems involve human control over specific functions, such as target engagement and attacking, ensuring non-autonomous operation of those functions, for example through a remote-control device.52 Despite continuous human input, these systems are still viewed as part of autonomous systems as the system has some form of autonomy or uses AI. The human interface is crucial in various mission phases, with ‘robots’ acting on human commands. Examples include the U.S. Predator MQ-1, operated remotely from afar, which can have emotional and moral detachment consequences.53 Thus, humans retain control over specific functions, and determining accountability would not be problematic. The human soldier decides whether the use of force is justifiable according to standard principles of LOAC and may be prosecuted for war crimes if the conduct does not meet the required threshold.

Man-on-the-loop systems grant AI control over operations, with possible human intervention.54 These systems, which may notify humans before action, require continuous monitoring to offset AI limitations. Rapid AI response times and possible automation bias pose challenges.55 Human supervision aims to ensure lawful operations, with accountability determined case by case and guided by the principles of LOAC. It would thus not be problematic to determine accountability.

Man-out-of-the-loop systems operate fully autonomously, with AI making decisions without human intervention. This mode raises significant accountability issues as humans cannot directly control the machine’s actions. Command responsibility may address

50 Massacci and Vidor (n 8) 103.


51 Layton (n 37) 13 and 24.
52 Jack McDonald, ‘Autonomous Agents and Command Responsibility’ in James Gow, Ernst Dijxhoorn, Rachel Kerr and Guglielmo Verdirame (eds), Routledge Handbook of War and Technology (Routledge, 2019) 32.
53 Noel Sharkey, ‘Saying ”No!“ to Lethal Autonomous Targeting’ [2010] 9(4) Journal of Military Ethics 369–

370.
54 McDonald (n 52) 32.

55 Massacci and Vidor (n 8) 103.

these gaps by holding humans accountable for deploying AWS despite foreseeable risks.56 Legal frameworks such as dolus eventualis and willful blindness have been proposed to ensure accountability.57 Suggested legal changes include amending international statutes to address AWS-specific offences, focusing on the delegation of the decision to kill. The next section therefore turns briefly to that delegation.

4 Delegating the decision to kill

Righetti et al.58 argue that delegating the decision to kill to an algorithm undermines human dignity and ethical norms. They assert that allowing algorithms to make life-and-death decisions necessitates strict, ethically grounded limitations on their capabilities. The primary goal is to balance morality and legality in warfare, ensuring that AWS comply with international law while maintaining superior ethical performance compared to human soldiers. Arkin59 also supports this view.

Furthermore, Righetti et al.60 stress the need for ethical norms for autonomous technologies, including LAWS. They propose developing a moral framework to guide the ethical use of lethal weapons. Similarly, the ICRC UK and Ireland61 urge governments to adopt international rules to prohibit or restrict certain AWS, including those controlled by military AI. Ascott62 suggests a rule-based moral framework as the best approach, while Canellas and Haga63 warn that the illusion of human control and ambiguous responsibility complicate accountability.

A robust legal and moral framework is necessary to guide LAWS use. Before suggesting
such a framework, one must examine the existing framework that guides the use of lethal
weapons.

5 The existing moral and legal framework guiding the ethical use of weapons

A military AI agent like an AWS cannot understand the value of human life because it lacks the experience of having personal projects or sensing mortality.64 A duty-holder cannot transfer duties to an entity incapable of bearing those duties and privileges. Similarly, a human combatant cannot transfer his privileges of targeting enemy combatants

56 Schulzke and Asaro, quoted by McDonald (n 52) 150.


57 Acquaviva (n 31) 97.
58 Righetti et al. (n 2) 125.

59 Ronald C Arkin. ‘The Case for Ethical Autonomy in Unmanned Systems’ [2010] 9(4) Journal of Military

Ethics 332–341. doi: [Link] 339.


60 Righetti et al. (n 2) 125.

61 ICRC UK & Ireland (n 34).

62 Tom Ascott, ‘Should Robots Kill? Towards a Moral Framework for Lethal Autonomous Weapon Systems’ (RUSI Newsbrief, 22 August 2019) <[Link]> accessed 10 September 2023.
63 Canellas and Haga (n 7) 57.

64 Michael Skerker, Duncan Purves and Ryan Jenkins, ‘Autonomous Weapon Systems and the Moral

Equality of Combatants’ [2020] 3(6) Ethics and Information Technology 197-209.

to a military AI system. Therefore, any human duty-holder who deploys AWS is guilty
of breaching the agreement between human combatants and, in that way, disrespects the
targeted combatants.65

Within the traditional context, the process of target selection and engagement is substantially in the hands of humans.66 It is a human endeavor to decide on the status of a potential target, through a process of initial target identification, utilizing materials that decision-makers have recourse to, and a process of questioning assumptions. This process is changing with the introduction of military AI, as battlefield target identification and selection decisions are increasingly being delegated to machine sensors and algorithms.67 Human review and override of specific functions is still required.68

Van Benthem69 points out that, for now, there is no consensus around the applicability of international law to autonomous systems. She states that agreeing that the law applies is not the issue; how the law applies is the very complex exercise. It is, therefore, necessary to look first into the standards of morality and legality and then interrogate international law in general and, more specifically, the LOAC.

The use of lethal autonomous systems requires some form of moral framework to guide their ethical use.70 Ethics is integral to using military AI,71 and from an international law perspective, a major legal concern for LAWS is whether they can be compatible with IHL, particularly with the core principles of distinction, proportionality and precaution.

5.1 Geneva Convention as the cornerstone of IHL

The Geneva Convention, particularly Common Article 1, underpins IHL. It mandates parties to respect and ensure respect for the treaty in all circumstances, obliging states to ensure compliance by their organs and those under their control.72

McDonald73 highlights the critical role of moral and legal decisions in military ethics, divided into jus ad bellum (resort to war) and jus in bello (conduct in war). He posits that the choice is analyzed in terms of state decision-making (the choice to use armed force or engage in war) or individual decision-making (whether the soldier’s conduct is morally

65 ibid 201.
66 Tsvetelina J van Benthem, Exploring Changing Battlefields: Autonomous Weapons, Unintended Engagements and the Law of Armed Conflict (NATO CCDCOE Publications, 2022) 198.
67 ibid.

68 Richard Loe, Christopher Maracchion and Andrew Drozd, ‘Semi-Autonomous Management of Multi-

ple Ad-Hoc Teams of UAVs’ [2015] IEEE Symposium on Computational Intelligence for Security and Defense
Applications (CISDA) 1.
69 Van Benthem (n 66) 201.

70 Ascott (n 62).

71 Layton (n 37) 7.

72 Kajander et al. (n 24) 80-82.

73 McDonald (n 52) 145.

justifiable). The LOAC is guided by four principles: military necessity (permitting force that would otherwise be prohibited), distinction (between combatants, non-combatants and objects), proportionality (to minimize collateral damage), and humanity (to prevent unnecessary suffering).74 These principles aim to minimize civilian harm and ensure lawful conduct in armed conflict.

Furthermore, Article 36 of Additional Protocol I to the 1949 Geneva Conventions requires states to review new weapons for compliance with IHL.75 The Martens Clause, also known as the dictate of public conscience, states that the fact that there is no law prohibiting a weapon does not mean that its use is permitted. The Clause functions as a residual principle and is intended to cover all instances not yet regulated by IHL. The Clause is restated in Article 1(2) of Additional Protocol I, which specifies that, in cases not covered by any international agreements, civilians and combatants remain under the protection of international law based on principles of humanity and public conscience.76

5.2 Aviation related legislation

Briefly considering aviation, LAWS, such as drones, may breach international air law.
The 1944 Chicago Convention and subsequent treaties, like the Beijing Protocol, regulate
drone safety and security, criminalizing unauthorized control and the use of aircraft for
harm. LOAC usually governs military aircraft during conflicts. 77

5.3 Ethical and legal concerns with AWS

AWS raise significant concerns regarding adherence to the principles of distinction and proportionality, particularly regarding civilian harm.78 Challenges include identifying combatants and distinguishing between military and civilian objects.79 This can easily lead to allegations of war crimes. Assessing war crimes and command responsibility is therefore crucial in this context.

5.4 War crimes and command responsibility

Human combatants may make errors and behave unethically, but eventually they can be held accountable. However, AWS cannot be held accountable, as robots do only what they are programmed to do and cannot be punished. Hence, who is responsible along the causal chain: the manufacturer, the designer, the defense force, the general in charge

74 Terry D Gill and Dieter Fleck (eds), ‘The Handbook of the International Law of Military Operations’
(2nd edn, Oxford University Press, 2015) 36.
75 Righetti et al. (n 2) 125.

76 Skerker (n 64) 82.

77 Robin Geiß and Nils Melzer (eds), The Oxford Handbook of the International Law of Global Security (Oxford University Press, 2021) 626.


78 Sharkey (n 53) 369–370.

79 Massacci and Vidor (n 8) 105.

of the mission or the operator? According to IHL, these AWS cannot be used if a human
cannot be held accountable.80

Facing constantly evolving technology and science, international courts often use methods of dynamic interpretation of international treaties and principles of international law to give them full effect.81 The same applies to human rights treaty bodies. Bannelier82 argues that human rights treaties are ‘living instruments’ that constantly undergo dynamic interpretation due to changing environments.

The notion of prosecuting soldiers for war crimes marked a shift from ‘concern to condemnation’ in the modern world.83 The military must reflect society; thus, soldiers are judged by morality and legality. Moreover, the permanent International Criminal Court enhances the notion of state responsibility but, more importantly, of individual criminal responsibility.

War crimes are closely linked to command responsibility. Traditionally, from the perspective of customary international law, command responsibility denotes that the military commander may be held criminally responsible for war crimes committed by his (or her) subordinates if the commander knew, or should have known, that the subordinate was committing or about to commit such crimes and, notwithstanding, failed to prevent the commission thereof and/or failed to punish the perpetrators.84 ‘Knowledge’ implies actual or constructive knowledge. In traditional warfare, the essence of command responsibility is accountability for all actions and results. Guiora85 argues that applying a similar model to autonomous warfare is essential. However, command responsibility implies the availability of reasonable information at the time of the commander’s decision.

As mentioned earlier, to accomplish the aim of the primary principles of LOAC, all feasible precautions must be taken during the planning phase and during an attack.86 This also includes the decision to cancel or suspend an attack when the collateral effects of the

80 James Gow, Ernst Dijxhoorn, Rachel Kerr and Guglielmo Verdirame (eds), Routledge Handbook of War
and Technology (Routledge, 2019) 33.
81 Kai Ambos, ‘International Criminal responsibility in Cyberspace’ in Nicholas Tsagourias and Russel

Buchan (eds) Research Handbook on International Law and Cyberspace (Elgar, 2021) 172.
82 Karine Bannelier, ‘Is the Principle of Distinction Still Relevant in Cyberwarfare? From Doctrinal Discourse to State Practices’ in Nicholas Tsagourias and Russel Buchan (eds), Research Handbook on International Law and Cyberspace (Elgar, 2021) 447.
83 cf Gow et al. (n 80) 16.

84 Daniele Amoroso, ‘Jus In Bello and Jus Ad Bellum Arguments Against Autonomy in Weapon Systems: A Re-Appraisal’ [2017] 43 QIL, Zoom-in 5-31, 19.


85 Amos N Guiora, ‘Accountability and Decision Making in Autonomous Warfare: Who is Responsible?’ [2017] 2 Utah Law Review 418.


86 cf Layton (n 37). 7; Terry D Gill, ‘International Humanitarian Law Applied to Cyber Warfare: Precau-

tions, Proportionality and the Notion of “Attack” Under the Humanitarian Law of Armed Conflict’ in
Nicholas Tsagourias and Russel Buchan (eds) Research Handbook on International Law and Cyberspace (Elgar,
2021) 457.

attack are likely to be excessive in relation to the military advantage anticipated. Gill87 considers this ‘proportionality in a somewhat different context’, namely the obligation to choose the time, method and means of attack least likely to cause excessive collateral damage or injury. The standard is that of a ‘reasonable commander/combatant’ acting bona fide on the information reasonably available at the time the attack is planned or conducted.88 It requires a continuous assessment throughout the attack. Article 57(1) of Additional Protocol I requires constant care to spare the civilian population and objects. Gow et al.89 reason that this obligation binds everyone involved in the mission’s planning phase.

Furthermore, Article 57(2)(a)(i) requires that attackers do everything feasible to verify that it is not prohibited to attack the intended targets. When the performance of an AWS cannot be determined with certainty in advance, the human operator, soldier or commander needs to consider this, and appropriate limitations must be developed to ensure that the discrimination and precaution rules will be complied with.90 Hence, one of the major concerns with AWS in general, and LAWS more specifically, is the apparent ambiguity over who should be held accountable for the deaths caused by these systems. This leads to the apparent ‘accountability gap’.

Consequently, some states and militaries have addressed the accountability gap in legislation and policies. For example, United Kingdom (UK) policy dictates that for all weapon systems, ‘legal responsibility for any military activity remains with the last person to issue the command authorising a specific activity’.91 The first U.S. policy on autonomy in weapon systems, Directive Number 3000.09, stipulates that ‘Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force’.92 Massacci and Vidor,93 however, warn about the level of human control, pointing to the Boeing 737 MAX incidents, where man did not have the power to intervene in the software system although he was aware that something was wrong. Following the debate on ‘meaningful human control’94 over AWS at the 2014 UN Convention on Certain Conventional Weapons, two competing ideas emerged: on the one hand, those supporting Sparrow’s claim that robots cannot be held responsible for their actions, and, on the other, follow-

87 Terry D Gill, ‘International Humanitarian Law Applied to Cyber Warfare: Precautions, Proportionality
and the Notion of “Attack” Under the Humanitarian Law of Armed Conflict’ in Nicholas Tsagourias and
Russel Buchan (eds) Research Handbook on International Law and Cyberspace (Elgar, 2021) 464.
88 ibid.

89 cf Gow et al. (n 80) 34.

90 ibid, 40.

91 ibid 42.

92 ibid 142.

93 cf Massacci and Vidor (n 8) 105.

94 cf Cummings (n 35) 21-22, who states that meaningful human control means that a human has to monitor a weapon until impact and may perhaps have the ability to remotely abort the mission. It is also about the role allocation between humans and autonomous weapon systems.

ers of Schulzke, who argue that command responsibility sufficiently addresses responsibility for non-human agents.95 McDonald96 concluded that there is too much focus upon ‘bottom-up’ compliance, seeking adherence to LOAC through individual ground-level decisions. According to him,97 existing military structures already ensure ‘top-down’ compliance, and he argues that the combination of these two concepts is sufficient to adhere to LOAC principles of responsibility. McDonald98 acknowledges LAWS’ challenge to meaningful ‘bottom-up’ human control, but firmly believes that professional militaries will not employ LAWS in a manner that violates LOAC.

Irrespective of the two conflicting streams of thought, all agree that, following the commission of war crimes, it is crucial that humans remain responsible for these violations. The crux of command responsibility is the notion that lethal decisions can be attributed to human beings.99 Considering the complex nature of modern warfare, from a practical perspective it might be difficult to pinpoint the exact individual who must face responsibility. McDonald provides the example of current (acceptable) aerial warfare, of a fast jet pilot being instructed on a target by an air controller on the ground, and reasons that:

… unless the pilot is in possession of information that the use of force would be
unlawful, there is the expectation that the pilot would use force against the target
described to him by the air controller. In this instance, the very nature of military
operations reduces the pilot’s autonomy.100

Thus, the practice of warfare implies that decision-making is often distributed between participants.101

Subsequently, following the ideas proposed at the Convention on Certain Conventional Weapons, the Group of Governmental Experts adopted eleven guiding principles on new technologies related to LAWS in 2019. The fourth principle states that:

…accountability for developing, deploying and using any emerging weapons system must be ensured in accordance with applicable international law, including through the operation of such systems within a responsible chain of human command and control.102

Amoroso declares that command responsibility is not the solution to the accountability
conundrum but proposes a ‘ … shift in the accountability focus from the deployment to

95 cf Gow et al. (n 80) 142.


96 cf McDonald (n 52) 145.
97 ibid 143.

98 ibid.

99 ibid 144.

100 ibid.

101 ibid.

102 cf Acquaviva (n 31) 108.

the development/procurement phase …’103 Hence, Amoroso104 suggests that war crime responsibility should primarily lie with military procurement managers, weapon developers and legal advisors. The authors disagree with this suggestion, as it would result in an absurdly long causal chain of responsibility.

The following section suggests some considerations for developing a robust moral and
legal framework to bridge the accountability gap related to AWS.

6 Towards a moral and legal framework to close the accountability gap

To address the accountability gap in the use of AWS and LAWS, it is crucial to examine the principles of morality and legality in their application. Key issues include the ability of machines to distinguish civilians from combatants, the assurance of compliance with IHL, and the attribution of criminal responsibility in cases of failure.

The principle of distinction, a cornerstone of the LOAC, raises questions about whether machines can accurately identify legitimate targets.105 Predictability concerns focus on the machine’s behavior in unpredictable scenarios. Accountability questions revolve around tracing failures, attributing responsibility, and accountability for IHL violations. Furthermore, willful human action is a requirement for the criminal liability of individuals in the event of any violation of IHL, and Article 30 of the Rome Statute addresses this issue in relation to ‘intent and knowledge’.106 Subsequently, meaningful human control of AI systems is required to comply with existing normative frameworks that justify violence in war.107 Military commanders’ decision-making is always subject to disciplinary scrutiny due to the direct relationship between decision-making and accountability. The legitimacy of a military action demands that accountability be integral to its decision-making. Van Benthem108 refers to the term intent, which Boothby states is a precondition for violating the fundamental rules of the LOAC, such as the prohibition on attacking civilians. This view complicates issues related to the amount or quality of human control necessary for the lawful use of AI-enabled LAWS.109 Subsequently, these issues should be considered, and IHL should be re-assessed, when developing a moral and legal framework to close the accountability gap, where more role players are involved than just soldiers on the battlefield.

First and foremost, when considering AI systems, one must address the role of supplying states and manufacturers. This will require extending the Geneva Convention’s Common Article 1 to ensure manufacturers design AWS that cannot violate IHL and incorporate mechanisms for remote shutdown. Hence, in November 2017, the state parties to the UN

103 cf Amoroso (n 84) 20.


104 ibid.
105 cf Righetti et al. (n 2) 125; cf Gow et al. (n 80) 34.

106 Rebecca Crootof, ‘War Torts: Accountability for Autonomous Weapon Systems’ [2016] 164(6) University of Pennsylvania Law Review 1347–1402, 1375.


107 cf McDonald (n 52) 33; cf Righetti et al. (n 2) 124; cf Cummings (n 35) 20.

108 cf Van Benthem (n 66) 198.

109 cf Canellas and Haga (n 7) 52.

Convention on Conventional Weapons established an open-ended Group of Governmental Experts to explore recommendations on addressing LAWS in a legally binding international instrument.110 The outcome of this endeavor may provide the answer to the accountability gap predicament. The extension of the Geneva Convention’s Common Article 1 addresses the point that responsibility and accountability begin during the manufacturing of the system, which should, for example, build in an overriding ‘backdoor’ to prevent the unethical use or misuse of systems. Similarly, the Australian Department of Industry, Science and Energy111 emphasizes the identifiability and accountability of the individuals responsible for the different phases of the AI life cycle, and requires that human oversight be enabled.

Another option is to look at layered target identification.112 Implementing dual-layered target identification, where humans make strategic decisions and AWS execute them, ensures clear lines of accountability. Such a system can, for example, create a log that stores all orders from human operators.113
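A minimal sketch of such an order log follows (our illustration; the class and field names are hypothetical, not drawn from the cited sources). Each authorization is recorded and chained to the previous entry so the record is tamper-evident, supporting after-the-fact accountability:

```python
import hashlib, json, time

class OrderLog:
    """Append-only log of human authorizations for AWS engagements."""

    def __init__(self):
        self._entries = []

    def record(self, operator_id: str, order: str, target_id: str) -> str:
        entry = {
            "time": time.time(),
            "operator": operator_id,   # who issued the order
            "order": order,            # what was authorized
            "target": target_id,       # against which target
            # Chain each entry to the previous one: altering any past
            # entry breaks every later hash.
            "prev": self._entries[-1]["hash"] if self._entries else None,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._entries.append(entry)
        return entry["hash"]

log = OrderLog()
log.record("operator-7", "ENGAGE", "target-042")   # hypothetical entries
log.record("operator-7", "ABORT", "target-042")
```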

Thirdly, attention should be given to the certification of AWS. AI combined with machine learning enables the weapon system to learn and modify its behavior as it progresses. This supports the notion that militaries should legally be required to have their systems certified before deployment, so that they adhere to moral and legal requirements. Rigorous certification processes ensuring that AWS meet ethical and legal standards before deployment may also obviate the need to ban LAWS outright.114 Each system should be approached case by case for certification, because algorithms are not deterministic but depend on the training data and environment.

Fourthly, consideration should be given to preprogramming AWS with a moral code. Embedding basic moral principles in AWS to prevent IHL violations and ensure ethical behavior can assist in preventing breaches of the Geneva Convention and IHL.115 Instead of having laws merely restrict an AWS’s behavior, the system should be empowered to pick the best solution for any given scenario based on the moral and legal code.116 In this way, it could perform even better than humans.117 The question, however, remains as to what will happen if an AWS is confronted with situations that have never been encountered before, such as in asymmetric warfare.
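What such a preprogrammed code might look like can be sketched as hard constraints checked before any engagement (our illustration only; the thresholds and field names are purely hypothetical, not a real AWS implementation):

```python
from dataclasses import dataclass

@dataclass
class Engagement:
    target_is_combatant: bool       # distinction: combatant vs civilian
    confidence: float               # certainty of the identification (0-1)
    expected_civilian_harm: float   # proportionality inputs (abstract units)
    military_advantage: float

def engagement_permitted(e: Engagement) -> bool:
    # Distinction: never engage a target not confidently identified
    # as a combatant.
    if not e.target_is_combatant or e.confidence < 0.95:
        return False
    # Proportionality: expected collateral harm must not be excessive
    # relative to the anticipated military advantage.
    return e.expected_civilian_harm <= e.military_advantage

# Any engagement failing the check would be referred back to a human.
print(engagement_permitted(Engagement(True, 0.99, 0.1, 1.0)))   # True
print(engagement_permitted(Engagement(True, 0.80, 0.0, 1.0)))   # False
```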

110 cf Amoroso (n 84) 203.


111 Shannon Cooper, Damian Copeland and Lauren Sanders, ‘Methods to Mitigate Risks Associated with
the Use of AI in the Military Domain’ in Jan Maarten Schraagen (ed) Responsible Use of AI in Military
Systems (CRC Press, 2024) 141.
112 cf Cummings (n 35) 25-26.

113 cf Kajander et al. (n 24) 85.

114 cf Cummings (n 35) 25-26.

115 cf Kajander et al. (n 24) 85-86.

116 Salge, as quoted by Kajander et al. (n 24) 86.

117 cf Arkin (n 59) 333.

Additionally, integrating Responsible Artificial Intelligence (RAI) into military training programs, as proposed by Kuennen,118 is essential. This approach focuses on developing leaders who understand and can manage AI’s ethical and technical complexities in military operations. Integrating RAI with professional character development programs for officers will produce responsible leaders in the future. This requires relinquishing the assumption that ‘ethical algorithms’ are a panacea for RAI and equipping future officers with the specific technical competence and moral virtues required. This echoes Cummings’s119 earlier call for meaningful human control that goes beyond mere technical competence.

By addressing these areas, it is possible to develop a comprehensive framework, focused on appropriate oversight, impact assessment, audit, and due diligence mechanisms,120 that ensures the ethical and lawful use of AWS and closes the current accountability gap. Seixas-Nunes121 states that a different approach will prevent military commanders, or the AWS themselves, from being made ‘scapegoats’ for violations of IHL caused by those systems.

7 Conclusion

This paper sought to determine the nature of the accountability gap between man and machine and to make recommendations on how this gap can be closed in the moral and legal landscape of AI in military operations. It is clear that the extent of human involvement in AWS (whether integrated, supervisory, or absent) raises ethical and legal questions. The retention of some meaningful human control (man-on-the-loop) in the critical phases of the deployment of military AI is one of the central notions for its lawful and ethical use. The best approach to upholding the laws of war involves a combination of robust AI programming, human oversight, and clear accountability frameworks. Revising international laws and adapting military protocols are essential to address the challenges posed by autonomous weapon systems.

This will go beyond mere compliance with IHL and will include the development of clear and better practices, such as defining and implementing the appropriate level of human control in autonomous systems. In addition, states must do everything reasonably within their power to prevent the misuse of military AI that leads to violations of IHL, including the unauthorized use or the misuse of the technology to unlawful ends.

It is apt that, throughout its history, IHL has shown a considerable capability to adapt its functional rules to meet the challenges presented by ‘newly’ developed weapon systems. The authors firmly believe that the existing rules of IHL can respond to military AI, despite vast differences in current opinion on how these rules should be interpreted. Whether IHL principles are effectively translated into machine algorithms will depend on the specific machine and the situation at hand. However, international legal instruments should be developed and communicated urgently to guarantee accountability for any harm to protected interests caused by the conduct of military AI. Legal obligations and ethical responsibilities in war must not be outsourced to machines and software. Therefore, a genuinely human-centered approach should be followed in developing and using military AI in conflict areas.

118 Christopher S Kuennen, ‘Developing Leaders of Character for Responsible Artificial Intelligence’ [2023] 10(Fall) The Journal of Character & Leadership Development 273. doi: [Link]
119 cf Cummings (n 35) 22.
120 Jan Maarten Schraagen, ‘Introduction to Responsible Use of AI in Military Systems’ in Jan Maarten Schraagen (ed) Responsible Use of AI in Military Systems (CRC Press, 2024) 9.
121 ibid 10.

Thus, the debate on the development and use of AWS is still open, and the various parties still have a role to play. Framing the debate better and designing meaningful policies, procedures, and other governance mechanisms therefore remains essential. The international community, governments, militaries, weapon manufacturers, universities (academics) and research institutes must join this debate and collaborate to develop and enact the policies, guidelines, regulations, and standards needed to regulate military AI systems.

It is fitting to conclude this discussion on moral responsibility and accountability with the words of Giovanni Leoni, the Global Head of Algorithm and AI Ethics at Inter IKEA Group:

…The increased focus on Responsible AI [also suggesting accountability] and digital trust is a turning point in human history. No longer is technology only seen as a value creator independent of humans, but rather that technology should augment humans and [be] governed to serve the benefits of all humans. We can together shape the future we want ….122

122 Giovanni Leoni, LinkedIn (2022) [Link] Accessed on 24 December 2022.
