AI Ethics in Autonomous Weapon Systems: A Moral Dilemma
Abstract
As AI technology advances, its application in military systems raises profound ethical
concerns. This paper examines the moral implications of deploying autonomous weapon
systems (AWS), arguing that meaningful human control must be preserved to ensure
accountability and minimize harm.
Introduction
The integration of artificial intelligence into military strategy has transformed the nature of
modern warfare. Autonomous weapon systems, capable of selecting and engaging targets
without human intervention, challenge traditional ethical frameworks. This research
evaluates current international law and philosophical theories to argue against the fully
autonomous deployment of lethal systems.
Moral accountability in warfare hinges on the ability to assign responsibility for harm. If
machines make life-or-death decisions without human oversight, who bears moral and legal
responsibility for mistakes? Drawing from just war theory and principles of humanitarian
law, this paper contends that fully autonomous systems undermine ethical norms of
proportionality, distinction, and accountability.
Furthermore, the risk of algorithmic bias, hacking, or misidentification of targets poses a
grave threat to civilian populations. This paper therefore supports the adoption of
international treaties mandating human oversight in all lethal force decisions, reinforcing
both ethical integrity and global security.