Projects such as: the Future Combat System, a set of fourteen systems interconnected by an extensive, robust electronic-attack communication network that would give commanders a real-time view of the battlefield and support immediate decisions; multiple warheads that autonomously decide route changes and targets when another missile launched simultaneously is destroyed; and Project Maven, also known as the Algorithmic Warfare Cross-Functional Team, which aims to bring to the military various technologies already developed for civilian data collection and processing in order to open new capabilities for the artificial intelligence industry. Thus there is a redefinition of autonomous systems as systems that can change their behaviour in response to unanticipated events during their operation. That is, they are systems that, when interacting with their domain, are able to learn and make decisions without external agents dictating commands, or with those agents having only little influence.
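The redefinition above can be made concrete with a deliberately toy sketch: the defining property of such a system is a policy that extends itself when an unanticipated event arrives, rather than waiting for an external command. All class, method, and event names below are hypothetical illustrations, not drawn from any real system.

```python
# Toy sketch (hypothetical names): an autonomous system in the sense defined
# above -- it adapts its behaviour to unanticipated events during operation,
# without an external agent dictating the command.

class AutonomousAgent:
    """Minimal sense-decide-act loop that updates its own policy."""

    def __init__(self):
        # policy maps an observed event to a planned action
        self.policy = {"nominal": "continue_route"}

    def decide(self, event: str) -> str:
        if event not in self.policy:
            # unanticipated event: the agent extends its own policy
            # instead of waiting for an external command
            self.policy[event] = "replan_route"
        return self.policy[event]

agent = AutonomousAgent()
print(agent.decide("nominal"))         # known event -> continue_route
print(agent.decide("peer_destroyed"))  # unanticipated -> replan_route
```

The point of the sketch is only the structural one made in the abstract: the mapping from events to actions is not fixed at launch but grows through interaction with the domain.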
2008
Autonomous weapon (AW) systems are a new and rapidly developing branch of the warfare industry. However, autonomous weapons are not devices that belong strictly to the 21st century; in fact, some authors date the birth of the first autonomous weapons back to the 1920s[1]. But the delicate matter of defining such weapons raises the question of whether today's machines are really autonomous while yesterday's were just enhanced weaponry, preset to react to a certain, small number of input conditions. Also, should the definition state what humans would like AW systems to be, or what they really are[2]? Moreover, do the enhanced capabilities of such systems change the way humans should treat the actions of such machines? Do they pose a threat to humans like a kitchen knife, which has to be misused by a person to cause harm, or like an enemy soldier, who nevertheless has to take responsibility for his actions? This paper tries to establish a definition of AW systems...
Encyclopedia of Public Administration and Public Policy, Third Edition, 2015
The prospect for weapons systems to become imbued with artificial autonomy is being realized by the accelerating advances in the fields of computer science and robotics today. Rather than merely marking another incremental stage in the development of lethal technologies, the incorporation of autonomy into weapons systems potentially heralds a new era of armed conflict replete with its own set of challenges and possibilities.
The human experience of warfare is changing with the introduction of AI into advanced weapon technology. Particularly in the last five years, autonomous weapon systems (AWS) have generated intense global debate over the potential benefits and potential problems associated with these systems. Military planners understand that AWS can perform the most difficult and complex tasks, with or without human interference, and can therefore significantly reduce military casualties and save costs. These systems act as force multipliers to counter security threats. On the other hand, political pundits and public intellectuals opine that AWS without human control can lead to highly problematic ethical and legal consequences, with some even claiming that it is unethical to allow machines to control the life and death of a human being. Several prominent public figures, including Elon Musk and Apple co-founder Steve Wozniak, have called for a ban on "offensive autonomous weapons beyond meaningful human control". Militaries, on the contrary, believe that AWS can perform better without human control and follow legal and ethical rules better than soldiers. The debate over AWS is a continuing one. This chapter looks into the emergence of AWS, its future potential, and how it will impact future war scenarios, focussing thereby on the debate over the ethical-legal use of AWS and the viewpoints of military planners.
Keywords: Autonomous weapon systems • Military operations • Legal and ethical issues • Meaningful human control
Encyclopaedia of Public Administration and Public Policy (EPAP), Third Edition. Melvin Dubnick and Dominic Bearfield (Eds.), 2015
Challenges to the deployment of autonomous weapons
Autonomous Weapon Systems (AWS) are defined as robotic weapons that have the ability to sense and act unilaterally depending on how they are programmed. Such human-out-of-the-loop platforms will be capable of selecting targets and delivering lethality without any human interaction. This weapon technology may still be in its infancy, but both semi-autonomous and other precursor systems are already in service. There are several drivers of a move from merely automatic weapons to fully autonomous weapons able to engage a target based solely on algorithm-based decision-making. This requires a material step-change in both hardware and software and, once deployed, implies a significant change in how humans wage war. But complex technical difficulties must first be overcome before this new independent, self-learning weapon category can legally be deployed on the battlefield. AWS also pose basic statutory, moral and ethical challenges. This thesis identifies the manifest complexity involved in fielding a weapon that can operate without human oversight while still retaining value as a battlefield asset. Its key research question therefore concerns the practical and technical feasibility of removing supervision from lethal engagements. The subject's importance is that several well-tried concepts that have long comprised battlecraft may no longer be fit for purpose. In particular, legal and other obstacles challenge such weapons' continued compliance with the Laws of Armed Conflict. Technical challenges, moreover, include the setting of weapon values and goals, the anchoring of the weapon's internal representations, and the management of its utility functions, learning functions and other key operational routines. While the recent pace of development in these technologies may appear extraordinary, fundamental fault lines endure.
The thesis also notes the interdependent and highly coupled nature of the routines envisaged for AWS operation, in particular the ramifications arising from its machine-learning spine, in order to demonstrate how detrimental these compromises are to AWS deployment models. In highlighting AWS deployment challenges, the analysis draws on broad primary and secondary sources to conclude that Meaningful Human Control (MHC) should be a statutory requirement in all violent engagements.
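The conclusion above can be illustrated with a minimal sketch: Meaningful Human Control modelled as a mandatory gate between an algorithmic target-selection step (a utility function, in the thesis's terms) and any engagement. All function and variable names here are hypothetical illustrations under that assumption, not drawn from any fielded system.

```python
# Illustrative sketch (hypothetical names): MHC as a mandatory gate between
# algorithmic target selection and engagement, as the thesis argues it
# should be for all violent engagements.

def select_target(candidates, utility):
    """Algorithmic step: rank candidates by a utility function."""
    return max(candidates, key=utility)

def engage(target, human_approval: bool) -> str:
    # MHC requirement: no engagement without explicit human authorisation
    if not human_approval:
        return f"held: {target} awaiting human decision"
    return f"engaged: {target}"

candidates = ["decoy", "priority_target"]
utility = {"decoy": 0.2, "priority_target": 0.9}.get
chosen = select_target(candidates, utility)
print(engage(chosen, human_approval=False))  # -> held: priority_target awaiting human decision
print(engage(chosen, human_approval=True))   # -> engaged: priority_target
```

The design point is the one the thesis makes: the utility function may rank targets, but the authorisation step is structurally separate and cannot be bypassed by the algorithmic pipeline.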
This volume covers the subject of autonomous systems, which stand potentially to transform the way in which warfare is conducted. Advances in sensors, robotics and computing are permitting the development of a whole new class of systems offering a wide range of military benefits, including the ability to operate without personnel on board the platform, novel human-machine teaming concepts, and "swarm" operating methods. As with any such transformative advance, there are unique operational, legal, ethical and design issues. In a unique multidisciplinary contribution, this volume considers the opportunities, implications and challenges of increasing autonomy in defence systems.
Waging warfare and developing technologies are two distinctively human activities that have long been closely intertwined. Emerging technologies have continually shaped military strategy and policy, from the invention of Greek fire to gunpowder, artillery, nuclear weapons, and GPS- and laser-guided munitions. The diligent student of military history is also a keen observer of technological change. Once again, a new technology on the horizon promises to drastically alter the texture and norms of combat: lethal autonomous weapons. Remarkable advances in artificial intelligence, machine learning, and computer vision lend urgent credibility to the suggestion that reliable autonomous weapons are possible.
BTSym 2018, 2018
Artificial Intelligence, especially deep learning, is a path of no return. It is necessary, however, to reflect on the risks of this new technology. The article aims, therefore, to analyze some of these risks. To that end, the most extreme situation was chosen: autonomous weapons, capable of thinking and deciding for themselves who, when and how to kill a human being.
Autonomy in Weapon Systems. The Military Application of Artificial Intelligence as a Litmus Test for Germany's New Foreign and Security Policy. A Report by Daniele Amoroso, Frank Sauer, Noel Sharkey, Lucy Suchman and Guglielmo Tamburrini. Volume 49 of the Publication Series on Democracy, edited by the Heinrich Böll Foundation. ISBN 978-3-86928-173-5. Published under the following Creative Commons License: http://creativecommons.org/licenses/by-nc-nd/3.0
2020
Military robots have been used in war in some primitive form since the beginning of the 20th century. Only recently have they moved from a marginal role to the center stage of contemporary military and intelligence operations. The most technologically advanced armed forces have invested heavily in the development of unmanned vehicles and robots of all kinds, ranging from the now very familiar Predator drones to IED robots, heavy unmanned ground vehicles, unmanned maritime vehicles, automated sentry guns, micro-robots, malicious software bots for sophisticated cyber attacks, and even nano-bots. The common denominator of all these new types of weapons is that they are becoming more and more automated and, ultimately, autonomous. Mainly academics and peace activists have recently voiced strong concerns over the prospect of 'killer robots' roaming the earth and indiscriminately going after human prey. Although some of these concerns are legitimate, they are nevertheless strongly influe...
Military weapons and equipment benefit a great deal from advances in technology. These advances have allowed combatants to engage targets more accurately and from further away, which has meant increased safety for military forces and civilians and an improved capability to accomplish assigned tasks. The latest technology to improve the capability of military systems is autonomy. Military systems are gaining an increased capability to perform assigned tasks autonomously, and this capability will only improve over time. Autonomy in military systems allows human operators to remain out of harm's way and to complete tasks that manned systems cannot (or to complete these tasks more efficiently). If autonomous capability achieves the potential that many of its supporters believe it can, this will radically change how military operations are conducted. Because these changes may be so significant, and because they allow systems to perform tasks without being constrained or guided by constant human oversight, autonomous technology needs to be examined from an ethical perspective. This is especially necessary because autonomous capability can have both negative and positive effects on human life. While many articles and chapters focus exclusively on either the negative or positive implications of autonomous systems, this chapter considers both and offers recommendations for advancing policy in a manner that takes into account ethical considerations for the development and use of autonomous systems.
2020
In this chapter we discuss the controversial topic of autonomous weapons systems. We define terms, present elements and perspectives related to the debate surrounding autonomous weapons systems (AWS) and present the main arguments for and against the use of such systems.
The chapter focuses on how the use of AWS affects the conduct of war in tactical, operational, and strategic terms. The purpose is to identify and properly frame the implications of such weapons systems not only for the battlefield but also for broader issues such as the arms race, deterrence, and stability in the international order. In terms of structure, the chapter first tackles the definitional debate and approaches the types of autonomy in modern weapons systems. A common understanding of AWS is still lacking in both scientific and policy circles. The following two sections review the advantages and challenges that such weapons systems pose to military practitioners and policymakers. Building on the above, the final section discusses the potential impact of AWS on the international system and stresses the likelihood of unintended escalation and the need for arms control or international regulation of the development and deployment of AWS. The chapter ends with conclusions that summarize the main points discussed and highlight the need for policy development in this area. The research is based on a thorough literature review of scholarly articles, books, reports, and relevant publications that explore the strategic implications of autonomous weapons. Additionally, real-world examples and case studies where autonomous weapons have been deployed or tested are used to support the arguments and provide practical insights.
Penn State Journal of Law and International Affairs, 2020
Stanford Law and Policy Review, 25, 2014
In this Article, I review the military and security uses of robotics and "unmanned" or "uninhabited" (and sometimes "remotely piloted") vehicles in a number of relevant conflict environments that, in turn, raise issues of law and ethics that bear significantly on both foreign and domestic policy initiatives. My treatment applies to the use of autonomous unmanned platforms in combat and low-intensity international conflict, but also offers guidance for the increased domestic uses of both remotely controlled and fully autonomous unmanned aerial, maritime, and ground systems for immigration control, border surveillance, drug interdiction, and domestic law enforcement. I outline the emerging debate concerning "robot morality" and computational models of moral cognition and examine the implications of this debate for the future reliability, safety, and effectiveness of autonomous systems (whether weaponized or unarmed) that might come to be deployed in both domestic and international conflict situations. Likewise, I discuss attempts by the International Committee on Robot Arms Control (ICRAC) to outlaw or ban the use of autonomous systems that are lethally armed, as well as an alternative proposal by the eminent Yale University ethicist Wendell Wallach to have lethally armed autonomous systems that might be capable of making targeting decisions independent of any human oversight specifically designated "mala in se" under international law. Following the approach of Marchant et al., however, I summarize the lessons learned and the areas of provisional consensus reached thus far in this debate in the form of "soft-law" precepts that reflect emergent norms and a growing international consensus regarding the proper use and governance of such weapons.
Whilst fully Automated Weapon Systems (AWS) are not currently available in armed conflicts, there has already been a lot of discussion of the legality of AWS and how International Humanitarian Law (IHL) applies to the creation and use of such weapons as a means of waging war. This essay will first discuss the background to AWS, setting out the relevant definitions and scope for this paper. It will then assess the relevant principles, treaty law and customary law from IHL that would apply to AWS if they were used to achieve military objectives.
CSI Review , 2021
Comprehending and analysing Artificial Intelligence (AI) is fundamental to embracing the challenges of the future, specifically for the defence sector. Developments in this sector will involve both arms and operations. The debate is linked to the risks that automation could bring to the battlefield, specifically from Lethal Autonomous Weapons Systems (LAWS). While AI could bring many advantages in risk detection, protection and preparation capabilities, it may also bring several risks on the battlefield and break basic principles of International Law. Indeed, having the human operator "out of the loop" could lead to unprecedented challenges and issues. Such weapons may also strengthen terrorist groups, allowing them to plan mass attacks or targeted assassinations with no human sacrifice. The article, divided into three parts, aims to analyse LAWS and their related issues. The first part introduces LAWS and their applications worldwide. The second summarizes the problems concerning International Humanitarian Law. The last part focuses on the search for proper regulation and the EU position on the topic.