2008
The rise of autonomous military robotics presents numerous risks and ethical dilemmas, much like advancements in civilian robotics. The unpredictable nature of complex robotic systems poses challenges to establishing reliable ethical decision-making frameworks. This paper advocates a hybrid ethical approach, integrating virtue ethics into military robotics to enhance moral character in autonomous systems, while also suggesting a precautionary principle to guide research and development until risks are more thoroughly understood.
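To make the proposal concrete, here is a minimal sketch of what such a hybrid decision layer could look like, combining a hard top-down rule check, a virtue-weighted evaluation, and a precautionary veto under uncertainty. Everything here (the names, weights, and threshold) is an illustrative assumption, not an implementation from the paper.

```python
# Illustrative sketch of a hybrid ethical decision layer: hard rules first,
# then a virtue-weighted score, with a precautionary veto under uncertainty.
# All names, weights, and thresholds are hypothetical; the paper proposes
# the approach conceptually, not this code.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    virtue_scores: dict          # e.g. {"prudence": 0.8, "justice": 0.6}
    violates_hard_rule: bool     # e.g. breaches the laws of armed conflict
    outcome_uncertainty: float   # 0.0 (certain) .. 1.0 (unknown)

VIRTUE_WEIGHTS = {"prudence": 0.4, "justice": 0.4, "courage": 0.2}
PRECAUTION_THRESHOLD = 0.3  # refuse to act when outcomes are too uncertain

def evaluate(action: Action) -> float:
    """Weighted sum of virtue scores; a simple stand-in for moral character."""
    return sum(VIRTUE_WEIGHTS.get(v, 0.0) * s
               for v, s in action.virtue_scores.items())

def decide(candidates: list[Action]) -> Action | None:
    # Top-down filter: discard anything that breaks a hard rule.
    permitted = [a for a in candidates if not a.violates_hard_rule]
    # Precautionary principle: discard actions whose outcomes are too uncertain.
    cautious = [a for a in permitted
                if a.outcome_uncertainty <= PRECAUTION_THRESHOLD]
    if not cautious:
        return None  # default to inaction / referral to a human operator
    return max(cautious, key=evaluate)
```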
Paper for the 2011 conference on The Ethics of Emerging Military Technologies, organized by The International Society for Military Ethics, and hosted by the University of San Diego.
Chapman and Hall/CRC eBooks, 2024
Beyond The Horizon, 2021
The long-term impact of artificial intelligence (AI) on international security will be unprecedented. A fierce competition for global technological supremacy is already under way: the military powers' defense projects seek strategic advantages on the battlefield, and perceived rivalry only fuels further investment in research and development. Yet despite concerns about major conflicts between these powers, serious consequences will be felt sooner elsewhere, in the Global South. We should therefore ask how developing countries perceive the accelerating militarization of AI.
Digital War, 2024
Biases in artificial intelligence have been flagged in academic and policy literature for years. Autonomous weapons systems, defined as weapons that use sensors and algorithms to select, track, and engage targets without human intervention, have the potential to mirror systems of societal inequality by reproducing algorithmic bias. This article argues that the problem of ingrained algorithmic bias poses a greater challenge to autonomous weapons systems developers than most other risks discussed in the Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE on LAWS), and that this should be reflected in the outcome documents of those discussions. This is mainly because rectifying a discriminatory algorithm takes far longer than issuing an apology for an occasional mistake. Highly militarised states have controlled both the discussions and their outcomes, focusing on issues pertinent to them while ignoring what is existential for the rest of the world. Calls from civil society, researchers, and smaller states for a legally binding instrument to regulate the development and use of autonomous weapons systems have consistently included a demand to recognise algorithmic bias in autonomous weapons, yet this has not been reflected in the discussion outcomes. This paper argues that any ethical framework developed for the regulation of autonomous weapons systems should ensure, in detail, that their development and use do not discriminate against vulnerable sections of (global) society.
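One concrete form the paper's worry can take is a fairness audit: compare a target classifier's false-positive rate across population groups, since a persistent gap is exactly the kind of ingrained bias that cannot be apologised away after deployment. The sketch below is a hypothetical illustration; the data, groups, and field names are invented.

```python
# Hypothetical audit: compare a classifier's false-positive rate across
# population groups. A persistent gap is the kind of ingrained bias the
# abstract argues is far harder to rectify than an occasional mistake.

from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, predicted_positive, truly_positive)."""
    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # actual negatives per group
    for group, predicted, actual in records:
        if not actual:
            neg[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

# Invented example data: (group, flagged_as_target, actually_a_combatant)
audit_log = [
    ("A", True, False), ("A", False, False), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False),
]
rates = false_positive_rates(audit_log)
gap = max(rates.values()) - min(rates.values())
print(rates, "disparity:", round(gap, 2))  # a large gap should block deployment
```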
Synthese, 2023
Responsibility gaps concern the attribution of blame for harms caused by autonomous machines. The worry has been that, because they are artificial agents, it is impossible to attribute blame, even though doing so would be appropriate given the harms they cause. We argue that there are no responsibility gaps. The harms can be blameless. And if they are not, the blame that is appropriate is indirect and can be attributed to designers, engineers, software developers, manufacturers or regulators. The real problem lies elsewhere: autonomous machines should be built so as to exhibit a level of risk that is morally acceptable. If they fall short of this standard, they exhibit what we call 'a control gap.' The causal control that autonomous machines have will then fall short of the guidance control they should emulate.
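The 'control gap' claim invites a simple quantitative reading, sketched here as an assumption rather than the authors' own formalism: a machine is deployable only if its expected harm stays below a morally acceptable risk bound.

```python
# Toy reading of the "control gap": deployment is permissible only when the
# machine's residual risk (expected harm) is below a morally acceptable bound.
# The threshold and scenario numbers are invented for illustration.

ACCEPTABLE_RISK = 0.01  # max expected harm per mission, in arbitrary units

def expected_harm(scenarios):
    """scenarios: list of (probability, harm) pairs for one mission profile."""
    return sum(p * h for p, h in scenarios)

mission = [(0.002, 5.0), (0.0005, 10.0)]  # invented failure modes
risk = expected_harm(mission)
control_gap = max(0.0, risk - ACCEPTABLE_RISK)
print(f"risk={risk:.4f}, control gap={control_gap:.4f}")
# A positive control gap means the machine falls short of the guidance
# control it should emulate and must not be deployed as-is.
```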
2009
War robots clearly hold tremendous advantages: from saving the lives of our own soldiers, to safely defusing roadside bombs, to operating in inaccessible and dangerous environments such as mountainside caves and underwater. Without emotions and other liabilities on the battlefield, they could conduct warfare more ethically and effectively than human soldiers, who are susceptible to overreactions, anger, vengeance, fatigue, low morale, and so on. But the use of robots, especially autonomous ones, raises a host of ethical and risk issues. This paper offers a survey of such emerging issues in this new but rapidly advancing area of technology.
HAL (Le Centre pour la Communication Scientifique Directe), 2018
Journal of Experimental & Theoretical Artificial Intelligence, 2014
Military and economic pressures are driving the rapid development of autonomous systems. We show that these systems are likely to behave in antisocial and harmful ways unless they are very carefully designed. Designers will be motivated to create systems that act approximately rationally, and rational systems exhibit universal drives towards self-protection, resource acquisition, replication, and efficiency. The current computing infrastructure would be vulnerable to unconstrained systems with these drives. We describe the use of formal methods to create provably safe but limited autonomous systems. We then discuss harmful systems and how to stop them. We conclude with a description of the 'Safe-AI Scaffolding Strategy' for creating powerful safe systems with a high confidence of safety at each stage of development.
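One standard way to realize 'provably safe but limited' systems is a runtime shield: the agent may propose any action, but only actions in a set verified safe offline are ever executed. The sketch below illustrates the pattern; the safe set here is a stand-in for the output of real formal verification, which the sketch does not perform.

```python
# Sketch of a runtime shield in the spirit of "provably safe but limited"
# systems: an agent may propose anything, but only actions inside a set
# verified safe offline are ever executed. The action names are invented.

VERIFIED_SAFE = {"hold_position", "return_to_base", "observe"}

def shielded_execute(proposed_action: str, fallback: str = "hold_position") -> str:
    """Pass through verified actions; substitute a safe fallback otherwise."""
    if proposed_action in VERIFIED_SAFE:
        return proposed_action
    # Unverified actions (e.g. "acquire_resources", "self_replicate") are
    # exactly the instrumental drives the paper warns about; block them.
    return fallback

for action in ["observe", "self_replicate", "return_to_base"]:
    print(action, "->", shielded_execute(action))
```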
Stanford Law and Policy Review, 25, 2014
In this Article, I review the military and security uses of robotics and "unmanned" or "uninhabited" (and sometimes "remotely piloted") vehicles in a number of relevant conflict environments that, in turn, raise issues of law and ethics bearing significantly on both foreign and domestic policy initiatives. My treatment applies to the use of autonomous unmanned platforms in combat and low-intensity international conflict, but also offers guidance for the increased domestic use of both remotely controlled and fully autonomous unmanned aerial, maritime, and ground systems for immigration control, border surveillance, drug interdiction, and domestic law enforcement. I outline the emerging debate concerning "robot morality" and computational models of moral cognition, and examine the implications of this debate for the future reliability, safety, and effectiveness of autonomous systems (whether weaponized or unarmed) that might come to be deployed in both domestic and international conflict situations. Likewise, I discuss attempts by the International Committee for Robot Arms Control (ICRAC) to outlaw or ban the use of lethally armed autonomous systems, as well as an alternative proposal by the eminent Yale University ethicist Wendell Wallach to have lethally armed autonomous systems capable of making targeting decisions independent of any human oversight specifically designated "mala in se" under international law. Following the approach of Marchant et al., however, I summarize the lessons learned and the areas of provisional consensus reached thus far in this debate in the form of "soft-law" precepts that reflect emergent norms and a growing international consensus regarding the proper use and governance of such weapons.
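One recurring soft-law precept in this debate, meaningful human control over lethal targeting, can be made concrete as a gating check. The sketch below is an assumed illustration of such a gate, not ICRAC's or Wallach's actual proposal; all field names and the freshness window are hypothetical.

```python
# Illustrative gate enforcing a "human-in-the-loop" soft-law precept:
# a lethal engagement is released only with explicit, recent human
# authorization. Field names and the freshness window are assumptions.

from dataclasses import dataclass
import time

@dataclass
class EngagementRequest:
    target_id: str
    lethal: bool
    human_authorized: bool
    authorization_time: float  # UNIX timestamp of the human's sign-off

AUTH_WINDOW_SECONDS = 60  # a stale authorization does not count

def release_weapon(req: EngagementRequest) -> bool:
    if not req.lethal:
        return True  # non-lethal actions fall outside this gate
    fresh = (time.time() - req.authorization_time) < AUTH_WINDOW_SECONDS
    return req.human_authorized and fresh

req = EngagementRequest("T-042", lethal=True,
                        human_authorized=True,
                        authorization_time=time.time())
print("release:", release_weapon(req))
```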
IEEE Intelligent Systems, 2014
Central European Journal of International and Security Studies, 2021
Royakkers, L., & Olsthoorn, P. (2014). Military Robots and the Question of Responsibility. International Journal of Technoethics (IJT), 5(1), 1-14.
Digitization and Challenges to Democracy, Globalization Working Papers for Fall 2019, 2019
Fordham International Law Journal, 2021
Revista Academiei Forţelor Terestre, 2022
The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk: Euro-Atlantic Perspectives,, 2019
Obrana a strategie, 2023
Pak. Journal of Int'l Affairs, Vol. 5, Issue 1, 2022