Paper for the 2011 conference on The Ethics of Emerging Military Technologies, organized by The International Society for Military Ethics, and hosted by the University of San Diego.
The paper discusses ethical issues surrounding the use of autonomous systems, particularly in military applications such as unmanned aerial vehicles (UAVs) and robotic weapons. It explores how autonomous systems are defined and distinguished from robots, the psychological impact on the military personnel who operate these technologies, and cultural perceptions of their use. The discussion highlights the evolving nature of warfare with the introduction of autonomous systems and the attendant ethical dilemmas, particularly regarding responsibility and accountability in combat scenarios.
2009
War robots clearly hold tremendous advantages, from saving the lives of our own soldiers, to safely defusing roadside bombs, to operating in inaccessible and dangerous environments such as mountainside caves and underwater. Without emotions and other liabilities on the battlefield, they could conduct warfare more ethically and effectively than human soldiers, who are susceptible to overreactions, anger, vengeance, fatigue, low morale, and so on. But the use of robots, especially autonomous ones, raises a host of ethical and risk issues. This paper offers a survey of such emerging issues in this new but rapidly advancing area of technology.
Pak. Journal of Int'l Affairs, Vol. 5, Issue 1, 2022
Artificial intelligence and technological advancements have led to the development of robots capable of performing various functions. One purpose of such robots is to replace human soldiers on battlefields. Killer robots, referred to as "autonomous weapon systems," pose a threat to the principles of human accountability that underpin the international criminal justice system and the current law of war that has arisen to support and enforce it. They challenge the Law of War's conceptual framework. In the worst-case scenario, they might encourage the development of weapons systems designed specifically to avoid liability, for both governments and individuals, for the conduct of war. Furthermore, killer robots cannot comply with fundamental law of war principles such as the principle of responsibility. The accountability of autonomous …
The Applied Ethics of Emerging Military and Security Technologies, 2016
The Changing Scope of Technoethics in Contemporary Society, 2018
Although most unmanned systems that militaries use today are still unarmed and predominantly used for surveillance, it is especially the proliferation of armed military robots that raises some serious ethical questions. One of the most pressing concerns the moral responsibility in case a military robot uses violence in a way that would normally qualify as a war crime. In this article, we critically assess the chain of responsibility with respect to the deployment of both semi-autonomous and (learning) autonomous lethal military robots. We will start by looking at military commanders, as they are the ones with whom responsibility normally lies. We will argue that this is typically still the case when lethal robots kill wrongly – even if these robots act autonomously. Nonetheless, we will next look into the possible moral responsibility of the actors at the beginning and the end of the causal chain: those who design and manufacture armed military robots, and those who, far from the battlefield, remotely control them.
Ethics and Robotics, 2009
Killing with robots is no longer a future scenario; it became a reality in the first decade of the 21st century. U.S. and Israeli forces are using uninhabited combat aerial vehicles (UCAVs) in their so-called wars on terror, especially for targeted killing missions in Iraq, Pakistan, and Afghanistan, as well as in Lebanon and the Palestinian occupied territories (for example in Israel's recent war on Gaza). In recent years, the number of UCAV air strikes has risen significantly, as has the number of civilians killed. Nevertheless, the US government and military envision the automation of warfare by 2032 at the latest, and military robots are increasingly used in civilian contexts. In the face of these developments, discussions of robotic warfare and security technology from a science and technology studies and technoethical perspective are urgently needed. Important questions are how robotic warfare and security applications may find their way into society on a broad scale, and whether this might lead to a new global arms race, violations of the international law of warfare, an increasing endangerment of civilians, the embedding of racist and sexist implications, and the blurring of boundaries between military, police, and civil society.
Journal of Military Ethics, 2015
The debate on and around "killer robots" has been firmly established at the crossroads of ethical, legal, political, strategic, and scientific discourses. Flourishing at the two opposite poles, with a few contributors caught in the middle, the polemic still falls short of a detailed, balanced, and systematic analysis. It is for these reasons that we focus on the nitty-gritties, multiple pros and cons, and implications of autonomous weapon systems (AWS) for the prospects of the international order. Moreover, a nuanced discussion needs to feature the considerations of their technological continuity vs. novelty. The analysis begins by properly delimiting the AWS category as fully autonomous (lethal) weapon systems, capable of operating without human control or supervision, including in dynamic and unstructured environments, and capable of engaging in independent (lethal) decision-making, targeting, and firing, including in an offensive manner. As its primary goal, the article aims to move the existing debate to the level of a first-order structure and offers its comprehensive operationalisation. We propose an original framework based on a thorough analysis of six specific dilemmas, detailing the pro/con arguments for each: (1) (un)predictability of AWS performance; (2) dehumanization of lethal decision-making; (3) depersonalisation of the enemy (non-)combatant; (4) the human-machine nexus in coordinated operations; (5) strategic considerations; (6) AWS operation in law(less) zones. Concluding remarks follow.
Keywords: autonomous weapon systems, killer robots, lethal decision-making, military ethics, artificial intelligence, security regulation, humanitarian law, revolution in military affairs, military strategy
The U.S. military has started to construct and deploy robotic weapons systems. Although human controllers may still be monitoring the functioning of the technology, the next logical step is to transfer incrementally more of the decision-making power to the robots themselves. Thus, this article seeks to examine the ethical implications of the creation and use of "autonomous weapons systems."
Dynamiques Internationales, No. 8, "La robotisation du champ de bataille : Enjeux et impact sur les relations internationales" ("The Robotization of the Battlefield: Issues and Impact on International Relations"), 2013
There are currently strong debates over the use of robots on battlefields, mainly focusing on the morality of such systems and their potential ability to act ethically. While a discussion of the matter is undoubtedly necessary, much time is spent on this issue even though the real concern when it comes to dealing with robots is neither their intrinsic morality nor the possibility that they might become autonomous. The real concern is: who will be held responsible for the actions committed by such machines, especially if they are fitted with weapons?
In November 2012, Human Rights Watch, in collaboration with the International Human Rights Clinic of Harvard Law School, published a report called "Losing Humanity: The Case against Killer Robots". In this report the authors discuss why they find that the deployment of what they identify as "fully autonomous weapons" should be prohibited. They argue this on account of the inability of such weapons to abide by international humanitarian law. It is important to note, as the authors do, that these fully autonomous weapons do not yet exist. However, given the current debates on the deployment of remote-controlled drones in war areas and the similarity between these drones and fully autonomous weapons (the drones being a previous stage in the development of such weapons), it is important to review this report and analyse the grounds on which its arguments are made.
Royakkers, L., & Olsthoorn, P. (2014). Military Robots and the Question of Responsibility. International Journal of Technoethics (IJT), 5(1), 1-14.
Most unmanned systems used in operations today are unarmed and mainly used for reconnaissance and mine clearing, yet the increase in the number of armed military robots is undeniable. The use of these robots raises some serious ethical questions. For instance: who can reasonably be held morally responsible when a military robot is involved in an act of violence that would normally be described as a war crime? In this article, the authors critically assess the attribution of responsibility with respect to the deployment of both non-autonomous and non-learning autonomous lethal military robots. The authors start by looking at the role of those with whom responsibility normally lies, the commanders. The authors argue that this is no different in the case of the above-mentioned robots. After that, they turn to those at the beginning and the end of the causal chain, respectively the manufacturers and designers, and the human operators who remotely control armed military robots from behind a computer screen.
While there are many issues to be raised in using lethal autonomous robotic weapons, we argue that the most important question is: Should the decision to take a human life be relinquished to a machine? This question is often overlooked in favor of technical questions of sensor capability or operational questions of chain of command. We further argue that the answer must be "no" and offer three reasons for banning autonomous robots. 1) Such a robot treats a human as an object, instead of a person with inherent dignity. 2) A machine run by a program has no human emotions, no feelings about the seriousness of killing a human. 3) Using such a robot would be a violation of military honor. We therefore conclude that the use of an autonomous robot (not a remotely operated robot) in lethal operations should be banned for a first strike, but leave open the possibility of retaliatory or defensive use.
Fordham International Law Journal, 2021
This Article not only questions whether an embodied artificial intelligence ("EAI") could give an order to a human combatant but, controversially, examines whether it should also refuse one. A future EAI may be capable of refusing to follow an order, for example, where an order appeared to be manifestly unlawful, was otherwise in breach of International Humanitarian Law ("IHL") or national Rules of Engagement ("ROE"), or even where it appeared to be immoral or unethical. Such an argument has traction in the strategic realm in terms of "system of systems": the premise that more advanced technology can potentially help overcome Clausewitzian "friction," or the "fog of war." An aircraft's anti-stall mechanism, which takes over and corrects human error, is seen as nothing less than "positive." As part of opening this much-needed discussion, the Authors examine the legal parameters and, by way of a solution, provide a framework for overriding and disobeying. Central to this discussion are state-specific ROEs within the concept of the "duty to take precautions." At present, the guidelines relating to a human combatant's right to disobey orders are contained within such doctrine, but vary widely. For example, in the United States, a soldier may disobey an order only when the act in question is clearly unlawful. In direct contrast, Germany's "state practice" requires orders to be compatible with the much wider concept of human dignity and to be of "use for service." By way of a solution, the Authors propose a test referred to as "robot rules of engagement" ("RROE") with specific regard to the disobeying of orders. These RROE ensure, via a multi-stage verification process, that an EAI can discount human "traits" and minimize errors that lead to breaches of IHL. In the broader sense, the Authors question whether warfare should remain an utterly human preserve, where human error is an unintended but unfortunate consequence, or whether the duty to take all feasible precautions in attack in fact requires a human commander to utilize available AI systems to routinely question human decision-making and, where applicable, prevent mistakes. In short, the Article examines whether human error can be corrected and overridden, but for the better rather than for the worse.
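The "multi-stage verification process" described in this abstract lends itself to a small illustrative sketch. The Python fragment below is a hypothetical toy, not anything taken from the Article itself: the stage ordering (manifest unlawfulness, IHL proportionality, state-specific ROE with escalation to a human) follows the abstract, but every type, field name, and threshold here is an invented assumption.

# Hypothetical sketch of a multi-stage "robot rules of engagement" (RROE)
# verification pipeline, loosely following the stages named in the abstract.
# All names and thresholds are illustrative assumptions, not a real system's API.
from dataclasses import dataclass
from enum import Enum, auto


class Verdict(Enum):
    EXECUTE = auto()   # order passes every stage
    REFUSE = auto()    # order fails a mandatory legal stage
    ESCALATE = auto()  # ambiguous; defer to a human commander


@dataclass
class Order:
    target_id: str
    is_combatant: bool           # assumed upstream classification
    expected_civilian_harm: int  # assumed incidental-harm estimate
    military_advantage: int      # assumed advantage score


def verify_order(order: Order) -> Verdict:
    """Run the order through sequential RROE stages; a hard legal
    failure refuses, while anything ambiguous escalates to a human."""
    # Stage 1: manifest unlawfulness (e.g., a deliberate attack on a civilian).
    if not order.is_combatant:
        return Verdict.REFUSE

    # Stage 2: proportionality under IHL -- expected incidental harm
    # must not be excessive relative to the anticipated military advantage.
    if order.expected_civilian_harm > order.military_advantage:
        return Verdict.REFUSE

    # Stage 3: state-specific ROE / dignity check. Modeled here as
    # anything short of a clear pass being escalated, reflecting the
    # "duty to take precautions" rather than silent autonomous decision.
    if order.expected_civilian_harm > 0:
        return Verdict.ESCALATE

    return Verdict.EXECUTE


# Example: a lawful but harm-adjacent order is escalated, not decided autonomously.
print(verify_order(Order("T-01", is_combatant=True,
                         expected_civilian_harm=1, military_advantage=5)))

The design point this sketch tries to capture is the asymmetry the abstract implies: clear legal failures yield refusal, while merely ambiguous cases are routed back to a human commander rather than resolved by the machine.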
Mechanisms and Machine Science, 2025
This article investigates the growing use of robots and automation in military operations, emphasizing the ethical challenges posed to international humanitarian law. The Iraq War marked a key shift, transforming robots from tools viewed skeptically to vital military assets. By 2006, robots had executed over 30,000 missions, and demand for unmanned aerial vehicles (UAVs) surged. These technologies span military branches, including the navy's use of unmanned submarines. The focus is on Lethal Autonomous Weapon Systems (LAWS), which can independently make combat decisions. Nations like the U.S., China, and Russia are advancing LAWS, raising ethical concerns about autonomous warfare. The study aims to clarify issues surrounding LAWS, examine international arms control discourse, and propose regulatory strategies. Key areas of discussion include defining LAWS, reviewing debates under the Convention on Certain Conventional Weapons (CCW), addressing regulatory challenges, and suggesting regulation methods for dual-use technology weapons. The article stresses the need for preemptive arms control to limit LAWS development and anticipates future ethical and military landscapes shaped by these technologies. It calls for aligning future LAWS regulations with existing frameworks to manage their impact effectively.
2023
Purpose: The remarkable increase in the sophistication of artificial intelligence in recent years has already led to its widespread use in martial applications, with the potential of so-called 'killer robots' ceasing to be a subject of fiction. Approach: Virtually without exception, this potential has generated fear, as evidenced by a mounting number of academic articles calling for a ban on the development and deployment of lethal autonomous robots (LARs). In the present paper I start with an analysis of the existing ethical objections to LARs. Findings: My analysis shows contemporary thought to be deficient in philosophical rigour, with these deficiencies leading to an alternative thesis.
Ethics and Information Technology, 2010
Telerobotically operated and semiautonomous machines have become a major component in the arsenals of industrial nations around the world. By the year 2015, the United States military plans to have one-third of its combat aircraft and ground vehicles robotically controlled.
2014
Military robots are gradually entering the theater of war in many guises. As the capabilities of these robots move toward increased autonomous operation, a number of difficult ethical and legal issues must be considered, such as appropriate rules of engagement and even notions of robot ethics. In the distant future, as military "artificial beings" that draw on expected advances in cyborg and android technologies are developed, further issues of conscience, consciousness, personhood, and moral responsibility also arise.
2015
While modern states may never cease to wage war against one another, they have recognized moral restrictions on how they conduct those wars. These "rules of war" serve several important functions in regulating the organization and behavior of military forces, and they shape political debates, negotiations, and public perception. While the world has become somewhat accustomed to the increasing technological sophistication of warfare, it now stands at the verge of a new kind of escalating technology, autonomous robotic soldiers, and with them come new pressures to revise the rules of war to accommodate them. This paper considers the fundamental issues of justice involved in the application of autonomous and semi-autonomous robots in warfare. It begins with a review of just war theory, as articulated by Michael Walzer [1], and considers how robots might fit into the general framework it provides. In so doing it considers how robots, "smart" bombs, and other autonomous technologies ...
2007
This paper addresses a difficult issue confronting the designers of intelligent robotic systems: their potential use of lethality in warfare. As part of an ARO-funded study, we are currently investigating the points of view of various demographic groups, including researchers, regarding this issue, as well as developing methods to engineer ethical safeguards into their use on the battlefield.