The Legal and Moral Landscape of Military Artificial Intelligence: Fixing The Accountability Gap Between Man and Machine
Abstract
Artificial Intelligence (AI) increasingly creates an ethical and moral dilemma in traditional warfare. The irony is that new technology is introduced to make the battlefield more predictable, yet it has produced new sources of unpredictability, caused by the risk of not fully understanding the autonomous process of selection and engagement and of straining the link between intent and outcome. This paper aims to determine the nature of the accountability gap between man and machine powered by military AI, to suggest how this gap can be closed within the legal and moral landscape of AI in military operations, and to share the findings with the academic community. Areas that can contribute to the development of a comprehensive framework to close the accountability gap are proposed. These areas include, but are not limited to, the role of supplying states and manufacturers, layered target identification, certification of autonomous weapons systems (AWS), preprogramming of AWS with moral codes, and integrating Responsible Artificial Intelligence (RAI) into military leadership development and training. The paper concludes with some recommendations.
1 Introduction
1 Peter Layton, ‘The Artificial Intelligence Battlespace’ (RUSI, 9 March 2021) ([Link]) accessed 23 December 2022; Vojtech Šimák, Michal Gregor, Marián Hruboš, Dušan Nemec and Jozef Hrbček, ‘Why Lethal Autonomous Weapon Systems are Unacceptable’ (IEEE 15th International Symposium on Applied Machine Intelligence and Informatics, Herl’any, 26-28 July 2017); Pieter Elands, Marlijn Heijnen and Peter Werkhoven, ‘Operationalization of Meaningful Human Control for Military AI: A Way Forward’, Position Paper for the REAIM 2023 Summit, February 2023 ([Link]) accessed 4 November 2023.
2 Ludovic Righetti, Quang Cuong Pham, Radhamadhavan Madhavan and Raja Chatila, ‘Lethal Autonomous Weapon Systems’ [2018] 25(1) IEEE Robotics & Automation Magazine 123-126; Mann Virdee, ‘AI and national security: What are the challenges’ in Britain’s World: The Council on Geostrategy’s online
because AI is ‘blind’: it identifies a pattern and acts through dehumanisation and objectification, which can lead to automation bias.3 This independence is known as autonomy.4 Autonomy enables faster decision-making than in the past but complicates the quest to determine who is responsible and accountable for its actions.
This issue of accountability in the use of military AI systems has sparked international debate, particularly concerning compliance with international humanitarian law (IHL) and the related legal and ethical considerations.5 The consensus is that AI should not possess authority over human life and death,6 which creates a complex challenge in attributing responsibility for AI-driven decisions.
The push for international discourse began with the 2010 Berlin Statement by the International Committee for Robot Arms Control (ICRAC), highlighting concerns over the loss of human control in warfare.7 This was followed by other developments, such as discussions under the ‘United Nations Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects’.8 While some authors9 debate whether AWS or semi-autonomous weapon systems should be used at all, this paper assumes that they will be used in one AI form or another. The aim is to focus on the concern about accountability rather than on the legality of the weapon system itself.
dignity when AWS are used. See also: Peter-Dean Baker, ‘Ethical Issues of Using Killer Robots and Drones in
Warfare’ Web Seminar (2022), Filippo S de Sio and Jeroen van der Hoven, ‘Meaningful Human Control
Over Autonomous Systems: A Philosophical Account’ [2018] 5 Frontiers in Robotics and AI 5, Robert
Sparrow, ‘Killer Robots’ [2007] 24(1) Journal of Applied Philosophy 62-77, Michael Skerker, Duncan Purves
and Ryan Jenkins, ‘Autonomous Weapon Systems and the Moral Equality of Combatants’ [2020] 3(6)
Ethics and Information Technology 197, Ilse Verdiesen, ‘Comprehensive Human Oversight over
Autonomous Weapon Systems’ (D Phil Thesis, Delft University of Technology 2024) 19.
6 Marc Cannellos and Rachel Haga, ‘Lost in Translation: Building a Common Language for Regulating
Autonomous Weapons’ [2016] 53(3) IEEE Technology and Society Magazine 53, ICRC UK and Ireland, ‘What
you need to know about artificial intelligence in armed conflict’ (ICRC, 6 October 2023)
<[Link]
accessed 3 November 2023.
7 Cannellos and Haga (n 6) 51.
8 Fabio Massacci and Silvia Vidor, ‘Building Principles for Lawful Cyber Lethal Autonomous Weapons’
Key issues identified in this debate include defining autonomy, determining the neces-
sary level of control for lawful AWS use, establishing accountability frameworks, and
certifying permissible AWS.10 Additionally, maintaining human oversight in AI deployment presents challenges, such as addressing the responsibility gap, aligning AI behavior with stakeholder values, and ensuring that human performance is consistent with ethical
standards.11 Cannellos and Haga12 assert that there are many benefits to international
regulation of AWS, such as using similar standards and ensuring adherence to interna-
tional law.
This paper will begin by clarifying the core concepts, then discuss the implications of
military AI in autonomous warfare, followed by an exploration of the role of humans in
relation to military AI systems. It will also review the existing moral and legal frame-
works guiding ethical use of AI-enabled weapons. The final section will offer recommen-
dations to adjust these frameworks to address the accountability gap. This research seeks
to contribute to the academic and military discourse by focusing on the under-researched
area of accountability in military AI.
2 Concept clarification
This discussion is situated within the context of military ethics and IHL. Military ethics refers to a systematic process of weighing and explicating morality, comprising the human values and norms regarding discipline, integrity and honesty that guide the military community in making the right choices in their work context.
10 Cannellos (n 7) 52.
11 Pieter Elands, Marlijn Heijnen and Peter Werkhoven. ‘Operationalization of Meaningful Human Con-
trol for Military AI’ [2023] Position Paper for the REAIM 2023 Summit 1-9.
12 cf Cannellos et al. (n 7) 52.
13 ibid 55-56.
14 ibid 56.
and trajectory management functions.15 The same principle can thus be applied to AWS
in general and, more specifically, LAWS.
Verweij,16 a respected scholar in military ethics, states that a person demonstrates moral
responsibility if they recognize the ‘other’ and the appeal they make to them. Moral re-
sponsibility is obedience towards a bureaucratic authority in line with local circum-
stances.17 More specifically, within the context of AWS, moral responsibility is based on
whether and under which conditions humans are in control and, therefore, responsible
for their everyday actions.18
Control can be understood when referring to the work of the philosopher Dennett, who
states that A controls B only if the relation between A and B is such that A can drive B
into whichever of B’s normal range of states that A wants (intends) B to be in. 19 Two
broad views on control and moral responsibility can be postulated philosophically.20
Firstly, the incompatibilists deny the compatibility of causal explanations of human ac-
tions and human moral responsibility in that causality and human moral responsibility
cannot be reconciled. Secondly, the compatibilists believe that humans may be morally
responsible for some of their actions even if they do not possess any special metaphysical
power to escape the causal influences of their behavior. Control and accountability are
closely tied; if one has control, one has accountability.21
2.3 Accountability
15 ibid.
16 Désirée Verweij, ‘Introduction: Ethics and Military Practice’ in Désirée Verweij, Peter Olsthoorn and Eva van Baarle (eds), Ethics and Military Practice (Brill Nijhoff, 2022) 1-14.
17 Eric-Hans Kramer, Herman Kuipers and Miriam de Graaff, ‘An Organisational Perspective on Military
19 Peter-Dean Baker, ‘Ethical Issues of Using Killer Robots and Drones in Warfare’ Online Seminar (31
August 2022).
20 De Sio and van der Hoven (n 5) 4.
21 Baker (n 19).
22 Ilse Verdiesen, ‘Comprehensive Human Oversight over Autonomous Weapon Systems’ (D Phil Thesis, Delft University of Technology 2024).
Sometimes, one does not have control, but can still be accountable because one should
have control.23 The possibility of human intervention (also read control) is crucial for ac-
countability when it comes to the actions of an autonomous system. 24 If a human can
intervene and/or influence the autonomous system, it would thus not be viewed as truly
autonomous. However, when a system has no human overrides, it creates what Kajander
et al.25 call the accountability gap, which is the focus of this paper. There is, thus, a direct
relationship between decision-making and accountability.
Any life-or-death decision that a soldier may be confronted with is a moral decision. Any moral decision is guided by what one should do based on norms and subsequently has some form of moral consequence.26 Hence, military AI is inextricably bound up with moral or ethical decision-making in military practice.27
Baker states that based on the arguments by De Sio and van den Hoven, 28 there are
mainly three kinds of objections to providing full autonomy to a machine: (1) robots (AI
systems) are not and will likely not be able to make the practical and moral distinctions
required by the laws of armed conflict, for example, to distinguish between combatants
and non-combatants, and apply proportionality in the use of force. For this reason, dele-
gating military tasks to robots may contaminate military operations; (2) there is some-
thing fundamentally wrong with leaving the decisions of life and death of people to a
machine, and it is fundamentally mala in se (evil in itself); (3) in the case of war crimes or
fatal use of weapons, it might be impossible to hold anyone militarily and morally respon-
sible,29 and ultimately accountable.
The above thus leads to the question of who the responsible parties would be. One can
identify several role players in terms of responsibility, namely the manufacturer, the pro-
grammers, the seller of the AWS, the operator (in limited cases), and the user when, for
example, negligence leads to a failure, or even the machine itself.30 Similarly, all weapon
systems are designed and activated by human beings who can be held accountable if
23 Baker (n 19).
24 Aleksi Kajander, Agnes Kasper and Evhen Tsybulenko, ‘Making the Cyber Mercenary - Autonomous Weapon Systems and Common Article 1 of the Geneva Conventions’ in Tat’ána Jančárková, Lauri Lindström, Massimiliano Signoretti and Gábor Visky (eds), 20/20 Vision: The Next Decade - 12th International Conference on Cyber Conflict (Tallinn, 2022) 85.
25 ibid.
26 Christine Boshuijzen-Van Burken, ‘Ethics and Technology’ in Verweij, Olsthoorn and van Baarle (n 16)
71-72.
27 ibid 70.
29 Baker (n 19).
their ‘creations’ do not comply with the law of armed conflict (LOAC). 31 Current princi-
ples of International Criminal Law (ICL) provide for various classes of participation in
crimes (such as co-perpetration and aiding and abetting). Acquaviva32 concludes that it
will be up to the judges to decide on the facts of each case, how the individual partici-
pated in an international crime and what degree of culpability should be assigned to the
conduct. Criminal blame can theoretically be ascribed to any of the abovementioned in-
dividuals. Furthermore, in such a combined responsibility concept, the degree of respon-
sibility will inevitably impact the sentencing phase, where an assessment will be made
to determine how much punishment the manufacturer, programmer, seller or operator
‘deserve’ for their contribution.33
Autonomous weapons systems have received the most attention when it comes to the
use of military AI.34 First and foremost, it is vital to distinguish autonomous systems from
automated systems. An automated system operates by clear, repeatable rules based on
unambiguous sensed data. In contrast, an autonomous system takes in data about the
unstructured world around it, processes the data to generate information, generates alternatives and makes decisions in the face of uncertainty.35 Autonomous systems range from fully automated to fully autonomous. Semi-autonomous weapons have been used in warfare for many years.36 These systems are generally limited in their mobility: some are static systems that are either unmoving or move only in preprogrammed areas, while others are embedded in transport systems piloted by humans (such as an autopilot on an aeroplane) and can hardly operate without any human operator. At the opposite end are weapon systems in which man is no longer involved, such as uncrewed ground vehicles (UGVs) and uncrewed aerial vehicles (UAVs).37 From an overview of the literature,38 AWS, whether lethal or not, can be defined as weapon systems able to perform target selection (search, detection, identification and tracking) and to act on targets, including but not limited to attack (use of force, neutralization, damage or destruction). Thus,
33 ibid 107.
34 ICRC UK & Ireland. ‘What You Need to Know About Artificial Intelligence in Armed Conflict’ (2023)
37 Peter Layton, ‘The Artificial Intelligence Battlespace’ (RUSI, 9 March 2021) < The Artificial Intelligence
‘Jus In Bello and Jus Ad Bellum Arguments Against Autonomy in Weapon Systems: A Re-Appraisal’ [2017] 43 QIL, Zoom-in 5-31, 9, cf ICRC UK & Ireland (n 34).
when a system is fully autonomous, all decisions are delegated to the system itself. The
link to military AI is therefore clear.
There are some concerns about and criticism of AWS, although some argue that using decision support systems can help human decision-making in a way that facilitates compliance with IHL and minimizes the risk to civilians.39 With AWS, it is easier to dehumanize the enemy, viewing them as targets or collateral damage stripped of human characteristics such as love, humor, having a family and feelings, thereby giving up on the notion of a human enemy. The concern about autonomous weapons relates not only to the humans involved but also to the ‘machine’ or system itself, because machines lack specific human characteristics.40 Machines do not possess virtues such as sympathy or mercy, lack any inhibitions against violence and self-sacrifice, make decisions without emotion, and have no sense of mortality or conscience.41 Thus, it is difficult to grasp how humans can act responsibly if they allow machines to have a bearing on ending human life.
Furthermore, the unpredictability of new technology is a valid concern. The existing framework of laws was not created with autonomous systems in mind, and its application to autonomous systems, especially in the military domain, leads to legal uncertainties in the use of military AI.42 IHL places responsibility and accountability on humans, but increasing autonomy complicates the identification of the legally responsible person.43
AI involves using computer systems to execute tasks that require human cognition, planning or reasoning, such as directly triggering a strike against a person or a vehicle.44 Algorithms, sets of instructions or rules that a computer or machine must use to respond to a question or solve a problem, are the foundation of an AI system.
Furthermore, machine learning refers to AI systems that create their own instructions based on the data they are ‘trained’ on. These instructions are then used to generate a solution to a particular task. In essence, the software writes itself and improves itself based on the inputs from the environment in which it operates. Machine learning goes beyond so-called ‘rule-based algorithms’, as it does not always respond in the same way to the same input and is thus unpredictable. The ICRC UK and Ireland45 highlight the challenge of machine learning systems: even if the inputs are known, it can be difficult to explain retrospectively why the system produced a particular output. This so-called ‘black box’ character makes it difficult to pinpoint responsibility and accountability.
45 ibid.
Layton46 quotes Robert Work, who declares: ‘The reason to pursue AI is to pursue
autonomy. AI weapon systems might apply lessons learned in the battlespace to de-
velop their criteria against which to recognize a target, or they may observe and
record the pattern of life in the target area and subsequently apply their observations
and pre-learned lessons to decide what to attack.’ Therefore, some militaries are using AI to improve the speed and accuracy of their battlefield decision-making through automated command and control systems and predictive operational planning tools that address intelligence, surveillance, and reconnaissance. It is expected that the first country to understand AI well enough to change its human-centered force structures and embrace AI warfighting may gain a considerable advantage.
Various authors have identified certain shortcomings of AI. The most noteworthy are that humans can produce better results under certain circumstances; machines find it hard to handle minor context changes; AI systems can be easily fooled; AI is brittle outside the contexts it was trained for, making it difficult to apply knowledge across different contexts; AI has a limited capacity to generalize from little information; and humans make better judgments in environments of uncertainty.47 Furthermore, existing computer vision systems relying on machine learning are no better than humans at correctly distinguishing between criteria in high-uncertainty target identification scenarios.48
Despite the abovementioned challenges, the advantages of AI and AWS are substantial. Mirza et al.49 summarise them in terms of their use. Economically, AWS are more cost-effective than human-operated weapon systems: in 2016, the operational cost of a Predator UAV was approximately $100 per hour, compared with a minimum of $1500 per hour for a standard human-crewed tactical aircraft. Tactically, AWS do not risk a human pilot’s life and exclude the risk of the pilot becoming a prisoner of war. Technically, the radar signature of UAVs is much smaller than that of human-crewed aircraft and satellites, which enables them to fly closer to the earth’s surface and remain over an area for longer. Militarily, AWS have transformed the warfare landscape, mainly due to their ability to strike, suppress and destroy, and may as a result enhance mission effectiveness. Lastly, in the civilian arena, vast possibilities exist, including coastguard search and rescue, border surveillance, and firefighting.
46 Layton (n 37).
47 ibid 13-15.
48 Cummings (n 35) 25.
49 Muhammad Nadeem Mirza, Irfan Hasnain Qaisrani, Lubna Abid Ali and Ahmad Ali Naqvi.
‘Unmanned Aerial Vehicles: A Revolution in the Making’ [2016] 31(2) Research Journal of South Asian
Studies 249-250.
When considering the notion that AI seemingly accelerates decision-making, the shortcomings of AI referred to in the preceding paragraph resonate with the research problem this paper addresses, namely who will be held accountable for wrong decision-making that leads to the loss of innocent lives. This links closely to the role of man in AWS as identified at the Convention on Certain Conventional Weapons.50
AI-enabled systems have replaced many traditional human tasks, offering increased ef-
fectiveness, efficiency and speed. Layton51 describes three modes of autonomy in AWS:
man-in-the-loop, man-on-the-loop, and man-out-of-the-loop.
Man-in-the-loop systems involve human control over specific functions, such as target engagement and attacking, ensuring non-autonomous operation of those functions, for example through a remote-control device.52 Despite continuous human input, these systems are still viewed as part of autonomous systems because the system has some form of autonomy or uses AI. The human interface is crucial in various mission phases, with ‘robots’ acting on human commands. Examples include the U.S. Predator MQ-1, operated remotely from afar, which can lead to emotional and moral detachment.53 Thus, humans retain control over specific functions, and determining accountability would not be problematic. The human soldier decides whether the use of force is justifiable according to standard principles of LOAC and may be prosecuted for war crimes if the conduct does not meet the required threshold.
Man-on-the-loop systems grant AI control over operations, with possible human interven-
tion.54 These systems, which may notify humans before action, require continuous mon-
itoring to offset AI limitations. Rapid AI response times and possible automation bias
pose challenges.55 Human supervision aims to ensure lawful operations, with accounta-
bility determined case by case and guided by the principles of LOAC. It would thus not
be problematic to determine accountability.
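As a purely illustrative sketch, the difference between these modes can be reduced to the question of which human input, if any, gates an engagement decision. The mode names follow Layton’s taxonomy above, but the gating logic and all identifiers are assumptions made for this example, not a description of any fielded system.

```python
# Hypothetical sketch of the three modes of autonomy described above.
from enum import Enum

class AutonomyMode(Enum):
    IN_THE_LOOP = "man-in-the-loop"          # a human must authorise each engagement
    ON_THE_LOOP = "man-on-the-loop"          # the system acts unless a human vetoes in time
    OUT_OF_THE_LOOP = "man-out-of-the-loop"  # no human override once deployed

def may_engage(mode: AutonomyMode, human_authorised: bool, human_vetoed: bool) -> bool:
    """Return True if the system may engage a target under the given mode."""
    if mode is AutonomyMode.IN_THE_LOOP:
        return human_authorised      # engagement requires explicit human approval
    if mode is AutonomyMode.ON_THE_LOOP:
        return not human_vetoed      # engagement proceeds unless a human intervenes
    return True                      # out of the loop: the decision rests with the machine alone

# The accountability gap discussed in this paper arises in the last branch,
# where no human input gates or records the decision.
print(may_engage(AutonomyMode.IN_THE_LOOP, human_authorised=False, human_vetoed=False))     # False
print(may_engage(AutonomyMode.OUT_OF_THE_LOOP, human_authorised=False, human_vetoed=False)) # True
```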
Rachel Kerr and Guglielmo Verdirame (eds), Routledge Handbook of War and Technology (Routledge, 2019)
32.
53 Noel Sharkey, ‘Saying “No!” to Lethal Autonomous Targeting’ [2010] 9(4) Journal of Military Ethics 369–
370.
54 McDonald (n 52) 32.
these gaps by holding humans accountable for deploying AWS despite foreseeable risks.56 Legal doctrines such as dolus eventualis and willful blindness are proposed to ensure accountability.57 Suggested legal changes include amending international statutes to address AWS-specific offences, with a focus on the delegation of the decision to kill. The next part therefore briefly addresses the latter.
Righetti et al.58 argue that delegating the decision to kill to an algorithm undermines human dignity and ethical norms. They assert that allowing algorithms to make life-and-death decisions necessitates setting strict limitations on their capabilities, grounded in ethical considerations. The primary goal is to balance morality and legality in warfare, ensuring that AWS comply with international law while maintaining superior ethical performance compared to human soldiers. Arkin59 also supports this view.
Furthermore, Righetti et al.60 stress the need for ethical norms for autonomous technologies, including LAWS. They propose developing a moral framework to guide the ethical use of lethal weapons. Similarly, the ICRC UK and Ireland61 urge governments to adopt international rules to prohibit or restrict certain AWS, including those controlled by military AI. Ascott62 suggests a rule-based moral framework as the best approach, while Cannellos and Haga63 warn against the illusion of human control and ambiguous responsibility, which complicate accountability.
A robust legal and moral framework is necessary to guide LAWS use. Before suggesting
such a framework, one must examine the existing framework that guides the use of lethal
weapons.
5 The existing moral and legal framework guiding the ethical use of weapons
A military AI agent like an AWS cannot understand the value of human life because it
lacks the experience of having personal projects or sensing mortality. 64 A duty-holder
cannot transfer duties to an entity incapable of bearing those duties and privileges. Sim-
ilarly, a human combatant cannot transfer his privileges of targeting enemy combatants
59 Ronald C Arkin. ‘The Case for Ethical Autonomy in Unmanned Systems’ [2010] 9(4) Journal of Military
62 Tom Ascott, ‘Should Robots Kill? Towards a Moral Framework for Lethal Autonomous Weapon Sys-
64 Michael Skerker, Duncan Purves and Ryan Jenkins, ‘Autonomous Weapon Systems and the Moral
to a military AI system. Therefore, any human duty-holder who deploys AWS is guilty
of breaching the agreement between human combatants and, in that way, disrespects the
targeted combatants.65
Within the traditional context, the process of target selection and engagement is substantially in the hands of humans.66 It is a human endeavour to decide on the status of a potential target: decision-makers go through a process of initial target identification, utilizing the materials they have recourse to and questioning their assumptions. This process is changing with the introduction of military AI, as battlefield target identification and selection decisions are increasingly being delegated to machine sensors and algorithms.67 Human review of, and the ability to override, specific functions is still required.68
Van Benthem69 points out that, for now, there is no consensus on the applicability of international law to autonomous systems. She states that agreement that the law applies is not the issue; how the law applies is a very complex exercise. It is, therefore, necessary to look first into the standards of morality and legality and then to interrogate international law in general and, more specifically, the LOAC.
The use of lethal autonomous systems requires some form of moral framework to guide their ethical use.70 Ethics is integral to using military AI,71 and from an international law perspective a major legal concern for LAWS is whether they can be compatible with IHL, particularly with the core principles of distinction, proportionality and precaution.
McDonald73 highlights the critical role of moral and legal decisions in military ethics, divided into jus ad bellum (the resort to war) and jus in bello (conduct in war). He posits that such choices are analyzed in terms of state decision-making (the choice to use armed force or engage in war) or individual decision-making (whether the soldier’s conduct is morally
65 ibid 201.
66 Tsvetelina J van Benthem, Exploring Changing Battlefields: Autonomous Weapons, Unintended Engagements and the Law of Armed Conflict (NATO CCDCOE Publications, 2022) 198.
67 ibid.
68 Richard Loe, Christopher Maracchion and Andrew Drozd, ‘Semi-Autonomous Management of Multi-
ple Ad-Hoc Teams of UAVs’ [2015] IEEE Symposium on Computational Intelligence for Security and Defense
Applications (CISDA) 1.
69 Van Benthem (n 66) 201.
70 Ascott (n 62).
71 Layton (n 37) 7.
justifiable). The LOAC is guided by four principles: military necessity (permitting only the force required to achieve a legitimate military purpose), distinction (between combatants, non-combatants and civilian objects), proportionality (to minimize collateral damage), and humanity (to prevent unnecessary suffering).74 These principles aim to minimize civilian harm and ensure lawful conduct in
armed conflict.
Briefly considering aviation, LAWS such as drones may breach international air law. The 1944 Chicago Convention and subsequent treaties, like the Beijing Protocol, regulate drone safety and security, criminalizing unauthorized control of aircraft and the use of aircraft to cause harm. The LOAC usually governs military aircraft during conflicts.77
AWS raise significant concerns regarding adherence to the principles of distinction and proportionality, particularly in relation to civilian harm.78 Challenges include identifying combatants and distinguishing between military and civilian objects.79 This can easily lead to allegations of war crimes. Assessing war crimes and command responsibility is therefore crucial in this context.
Human combatants may make errors and behave unethically, but eventually, they can
be held accountable. However, AWS cannot be held accountable as robots do only what
they are programmed to do and cannot be punished. Hence, who is responsible along
the causal chain – the manufacturer, the designer, the defense force, the general in charge
74 Terry D Gill and Dieter Fleck (eds), ‘The Handbook of the International Law of Military Operations’
(2nd edn, Oxford University Press, 2015) 36.
75 Righetti et al. (n 2) 125.
77 Robin Geib and Nils Melzer (eds), The Oxford Handbook of the International Law of Global Security (Oxford
of the mission or the operator? According to IHL, these AWS cannot be used if a human
cannot be held accountable.80
Facing constantly evolving technology and science, international courts often use methods of dynamic interpretation of international treaties and principles of international law to give them full effect.81 The same applies to human rights treaty bodies. Bannelier82 argues that human rights treaties are ‘living instruments’ that constantly undergo dynamic interpretation due to changing environments.
The notion of prosecuting soldiers for war crimes marked a shift from ‘concern to con-
demnation’ in the modern world.83 The military must reflect society. Thus, soldiers are
judged by morality and legality. Moreover, the permanent International Criminal Court
enhances the notion of state responsibility but, more importantly, individual criminal
responsibility.
War crimes are closely linked to command responsibility. Traditionally, from the perspective of customary international law, command responsibility denotes that the military commander may be held criminally responsible for war crimes committed by his (or her) subordinates if the commander knew, or should have known, that the subordinate was committing or about to commit such crimes and, notwithstanding, failed to prevent the commission thereof and/or failed to punish the perpetrators.84 ‘Knowledge’ implies actual or constructive knowledge. In traditional warfare, the essence of command responsibility is accountability for all actions and results. Guiora85 argues that it is essential to apply a similar model to autonomous warfare. However, command responsibility implies the availability of reasonable information at the time of the commander’s decision.
As mentioned earlier, to accomplish the aim of the primary principles of LOAC, all fea-
sible precautions must be taken during the planning phase and during an attack. 86 This
also includes the decision to cancel or suspend an attack when the collateral effects of the
80 James Gow, Ernst Dijxhoorn, Rachel Kerr and Guglielmo Verdirame (eds), Routledge Handbook of War
and Technology (Routledge, 2019) 33.
81 Kai Ambos, ‘International Criminal responsibility in Cyberspace’ in Nicholas Tsagourias and Russel
Buchan (eds) Research Handbook on International Law and Cyberspace (Elgar, 2021) 172.
82 Karine Bannelier, ‘Is the principle of distinction still relevant in cyberwarfare? From doctrinal discourse to State practices’ in Nicholas Tsagourias and Russel Buchan (eds) Research Handbook on International Law and Cyberspace (Elgar, 2021) 447.
83 cf Gow et al. (n 80) 16.
84 Daniele Amoroso ‘Jus In Bello and Jus Ad Bellum Arguments Against Autonomy in Weapon Systems:
tions, Proportionality and the Notion of “Attack” Under the Humanitarian Law of Armed Conflict’ in
Nicholas Tsagourias and Russel Buchan (eds) Research Handbook on International Law and Cyberspace (Elgar,
2021) 457.
attack are likely to be excessive in relation to the military advantage anticipated. Gill87 considers this as ‘proportionality in a somewhat different context’, namely, choosing the time, method and means of attack least likely to cause excessive collateral damage or injury. The standard is that of a ‘reasonable commander/combatant’ acting bona fide on the information reasonably available at the time the attack is planned or conducted.88 It requires continuous assessment throughout the attack. Article 57(1) of Additional Protocol I requires constant care to spare the civilian population and civilian objects. Gow et al.89 reason that this obligation binds everyone involved in the mission’s planning phase.
Consequently, some states and militaries have addressed the accountability gap in legis-
lation and policies. For example, the United Kingdom (UK) policy dictates that for all
weapon systems, ‘legal responsibility for any military activity remains with the last per-
son to issue the command authorising a specific activity’. 91 The first U.S. policy on au-
tonomy in weapon systems, Directive Number 3000.09, stipulates that ‘Autonomous and
semi-autonomous weapon systems shall be designed to allow commanders and opera-
tors to exercise appropriate levels of human judgment over the use of force’.92 Massacci and Vidor,93 however, warn about the level of human control and point out that in the Boeing MAX incident, the pilots did not have the power to intervene in the software system, although they were aware that something was wrong. Following the debate on ‘meaningful human control’94 over AWS at the 2014 UN Convention on Certain Conventional Weapons, two competing ideas emerged: on the one hand, those supporting Sparrow’s claim that robots cannot be held responsible for their actions, and, on the other, followers of Schulzke, who argue that command responsibility sufficiently addresses responsibility for non-human agents.95
87 Terry D Gill, ‘International Humanitarian Law Applied to Cyber Warfare: Precautions, Proportionality
and the Notion of “Attack” Under the Humanitarian Law of Armed Conflict’ in Nicholas Tsagourias and
Russel Buchan (eds) Research Handbook on International Law and Cyberspace (Elgar, 2021) 464.
88 ibid.
90 ibid, 40.
91 ibid 42.
92 ibid 142.
94 cf Cummings (n 34) 21-22 states that meaningful human control means that a human has to monitor a
weapon until impact and may perhaps have the ability to remotely abort the mission. It is also about the
role allocation between humans and autonomous weapon systems.
McDonald96 concluded that there is too much focus on ‘bottom-up’ compliance, which seeks adherence to LOAC through individual ground-level
decisions. According to him,97 existing military structures already ensure ‘top-down’
compliance, and he argues that the combination of these two concepts is sufficient to
adhere to LOAC principles of responsibility. McDonald98 acknowledges the challenge LAWS pose to meaningful ‘bottom-up’ human control, but he firmly believes that professional militaries will not employ LAWS in such a manner as to violate LOAC.
Irrespective of the two conflicting streams of thought, all agree that, following the commission of war crimes, it is crucial that humans remain responsible for these violations. The crux of command responsibility is the notion that lethal decisions can be attributed to human beings.99 Considering the complex nature of modern warfare, it might, from a practical perspective, be difficult to pinpoint the exact individual who must face responsibility. McDonald provides the example of current (acceptable) aerial warfare, in which a fast jet pilot is instructed on a target by an air controller on the ground, and reasons that:
… unless the pilot is in possession of information that the use of force would be
unlawful, there is the expectation that the pilot would use force against the target
described to him by the air controller. In this instance, the very nature of military
operations reduces the pilot’s autonomy.100
Thus the practice of warfare implies that decision-making is often distributed between
participants.101
Amoroso declares that command responsibility is not the solution to the accountability conundrum but proposes a ‘… shift in the accountability focus from the deployment to the development/procurement phase …’.103
98 ibid.
99 ibid 144.
100 ibid.
101 ibid.
Hence, Amoroso104 suggests that war crime responsibility should primarily lie with military procurement managers, weapon developers and legal advisors. The authors disagree with this suggestion, as it would result in an absurdly long causal chain of responsibility.
The following section suggests some considerations for developing a robust moral and
legal framework to bridge the accountability gap related to AWS.
To address the accountability gap in the use of AWS and LAWS, it is crucial to examine the principles of morality and legality in their application. Key issues include the ability of machines to distinguish civilians from combatants, ensuring compliance with IHL, and attributing criminal responsibility in case of failure.
The principle of distinction, a cornerstone of the LOAC, raises questions about whether machines can accurately identify legitimate targets.105 Predictability concerns focus on the machine’s behavior in unpredictable scenarios. Accountability questions revolve around tracing failures, attributing responsibility, and accountability for IHL violations. Furthermore, willful human action is a requirement for the criminal liability of individuals in the event of violations of IHL, and Article 30 of the Rome Statute addresses this issue in relation to ‘intent and knowledge’.106 Consequently, meaningful human control of AI systems is required to comply with the existing normative frameworks that justify violence in war.107 Military commanders’ decision-making is always subject to disciplinary scrutiny due to the direct relationship between decision-making and accountability. The legitimacy of a military action demands that accountability be integral to its decision-making. Van Benthem108 refers to the term intent, which Boothby states is a precondition for violating the fundamental rules of the LOAC, such as the prohibition on attacking civilians. This view complicates issues related to the amount or quality of human control necessary for the lawful use of AI-enabled LAWS.109 These issues should therefore be considered, and IHL re-assessed, when developing a moral and legal framework to close the accountability gap, in which more role players are involved than just soldiers on the battlefield.
First and foremost, when considering AI systems, one must address the role of supplying states and manufacturers. This will require extending the Geneva Conventions’ Common Article 1 to ensure that manufacturers design AWS that cannot violate IHL and that incorporate mechanisms for remote shutdown.
106 Rebecca Crootof, ‘War Torts: Accountability for Autonomous Weapon Systems’ in University of Penn-
To this end, in November 2017, the state parties to the UN Convention on Conventional Weapons established an open-ended Group of Governmental Experts to explore recommendations on addressing LAWS in a legally binding international instrument.110 The outcome of this endeavor may provide the answer to the accountability gap predicament. The extension of Common Article 1 addresses the point that responsibility and accountability begin during the manufacturing of the system, into which, for example, an overriding ‘back door’ should be built to prevent the unethical use or misuse of the system. Similarly, the Australian Department of Industry, Science and Energy111 emphasizes the identifiability and accountability of the individuals responsible for the different phases of the AI life cycle and stresses that human oversight should be enabled.
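By way of illustration only, such a manufacturer-supplied override could be as simple as a shutdown flag that is checked before every engagement and that an authorised external party can set remotely. The class, method and flag names below are hypothetical and are not taken from any standard, directive or real system.

```python
# Hypothetical sketch of the overriding 'back door' described above: every
# engagement first checks a remotely settable shutdown flag.

class OverridableWeaponSystem:
    def __init__(self) -> None:
        self.shut_down = False  # may be set remotely by an authorised party

    def remote_shutdown(self, authorised: bool) -> None:
        # The override itself must be access-controlled; otherwise it becomes
        # a vulnerability rather than a safeguard.
        if authorised:
            self.shut_down = True

    def engage(self, target_id: str) -> bool:
        if self.shut_down:
            return False  # engagement refused: the system has been disabled
        # ... normal, human-supervised engagement logic would follow here ...
        return True

aws = OverridableWeaponSystem()
aws.remote_shutdown(authorised=True)
print(aws.engage("hypothetical-target"))  # False: the override blocks the engagement
```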
Another option is layered target identification.112 Dual-layered target identification, in which humans make strategic decisions and AWS execute them, ensures clear lines of accountability. Such a system can, for example, create a log that stores all orders from human operators.113
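A minimal sketch of such a log follows. The structure and field names are hypothetical and serve only to illustrate how human orders could be recorded so that accountability can later be traced; they do not reflect any existing system.

```python
# Hypothetical sketch of the dual-layered arrangement described above: humans
# issue the strategic order, the AWS executes it, and every order is logged.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class HumanOrder:
    operator_id: str            # who authorised the engagement
    target_description: str
    legal_review_done: bool
    issued_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class OrderLog:
    def __init__(self) -> None:
        self._entries: list[HumanOrder] = []

    def record(self, order: HumanOrder) -> None:
        # An append-only record of every human order, available for later review.
        self._entries.append(order)

    def audit_trail(self) -> list[HumanOrder]:
        return list(self._entries)

log = OrderLog()
log.record(HumanOrder(operator_id="op-001",
                      target_description="hypothetical target",
                      legal_review_done=True))
print(len(log.audit_trail()))  # 1: the order is traceable to a named operator
```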
Thirdly, attention should be given to the certification of AWS. AI combined with machine learning enables a weapon system to learn and modify its behavior as it progresses. This reinforces the notion that militaries should be legally required to have their systems certified before deployment so that they adhere to moral and legal requirements. Rigorous certification processes that ensure AWS meet ethical and legal standards before deployment would thus negate the possibility of an outright ban on LAWS.114 Each system should be certified on a case-by-case basis because algorithms are not deterministic but depend on the training data and environment.
Fourthly, consideration should be given to preprogramming AWS with a moral code. Embedding basic moral principles in AWS to prevent IHL violations and ensure ethical behavior can assist in preventing breaches of the Geneva Conventions and IHL.115 Instead of having laws that merely restrict an AWS’s behavior, the system should be empowered to pick the best solution for any given scenario based on the moral and legal code.116 In this way, it could perform even better than humans.117 The question, however, remains as to what will happen if an AWS is confronted with situations that have never been encountered before, such as in asymmetric warfare.
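At toy scale, the idea of a preprogrammed moral and legal code can be illustrated as a set of checks that every proposed engagement must pass before it is even considered. The rules and fields below are invented for this sketch; they are loosely inspired by the principles of distinction and proportionality but do not encode the actual legal tests.

```python
# Hypothetical sketch of 'preprogramming a moral code': a proposed engagement is
# only considered if it passes hard-coded checks. The thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class ProposedEngagement:
    target_is_combatant: bool          # distinction: only combatants may be targeted
    expected_civilian_harm: float      # proportionality inputs (arbitrary units)
    expected_military_advantage: float

def permitted_by_moral_code(e: ProposedEngagement) -> bool:
    if not e.target_is_combatant:
        return False  # fails distinction outright
    # A crude stand-in for proportionality: expected harm may not exceed the advantage.
    return e.expected_civilian_harm <= e.expected_military_advantage

print(permitted_by_moral_code(ProposedEngagement(True, 0.2, 1.0)))   # True
print(permitted_by_moral_code(ProposedEngagement(False, 0.0, 1.0)))  # False
```

Even in this toy form, the paragraph’s closing question remains visible: the checks only cover the situations their programmer anticipated.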
Additionally, integrating Responsible Artificial Intelligence (RAI) into military training programmes, as proposed by Kuennen,118 is essential. This approach focuses on developing leaders who understand and manage AI’s ethical and technical complexities in military operations. Integrating RAI with professional character development programmes for officers will develop responsible leaders for the future. This requires relinquishing the assumption that ‘ethical algorithms’ are a panacea for RAI and equipping future officers with the specific technical competence and moral virtues required. This echoes Cummings’s119 earlier point that meaningful human control requires more than merely technical competence.
7 Conclusion
This paper seeks to determine the nature of the accountability gap between man and
machine and to make recommendations on how this gap can be closed in the moral and
legal landscape of AI in military operations. It is clear that the extent of human involve-
ment in AWS—whether integrated, supervisory, or absent—raises ethical and legal ques-
tions. The retention of some meaningful human control (man-on-the-loop) in the critical
phases of the deployment of military AI is one of the central notions for the lawful and
ethical use of military AI. The best approach to upholding the laws of war involves a com-
bination of robust AI programming, human oversight, and clear accountability frame-
works. Revising international laws and adapting military protocols are essential to ad-
dress the challenges posed by autonomous weapon systems.
This goes beyond compliance with IHL and includes the development of clear and better practices, such as defining and implementing the appropriate level of human control in autonomous systems. In addition, states must do everything reasonably within their power to prevent the misuse of military AI that leads to violations of IHL, including preventing the unauthorized use or misuse of the technology to that end.
It is apt that IHL has shown a considerable capability to adapt its functional rules to meet
challenges presented by ‘newly’ developed weapon systems throughout its history. The
authors firmly believe that the existing rules of IHL can respond to military AI despite
118 Christopher S Kuennen, ‘Developing Leaders of Character for Responsible Artificial Intelligence’ [2023] Fall(10) The Journal of Character & Leadership Development 273. doi: [Link]
119 cf Cummings (n 35) 22.
120 Jan Maarten Schraagen, ‘Introduction to Responsible Use of AI in Military Systems’ in Jan Maarten
vast differences in current opinion when interpreting these rules. Whether IHL princi-
ples are effectively translated into machine algorithms will depend on the specific ma-
chine and the situation at hand. However, international legal instruments should be de-
veloped and communicated urgently to guarantee accountability for any harm to pro-
tected interests caused by the conduct of military AI. Legal obligations and ethical re-
sponsibilities in war must not be outsourced to machines and software. Therefore, a gen-
uine human-centered approach should be followed in developing and using military AI
in conflict areas.
Thus, the debate on the development and use of AWS is still open, and the various parties still have a role to play. Framing the debate better and designing meaningful policies, procedures, and other governance instruments therefore remains essential. The international community, governments, militaries, weapon manufacturers, universities (academics) and research institutes must join this debate and collaborate to develop and enact the policies, guidelines, regulations, and standards needed to regulate military AI systems.