A Dilemma for Moral Deliberation in AI

Abstract

Many social trends are conspiring to drive the adoption of greater automation in society, and the economic benefits of automation have motivated a dramatic transition to automated manufacturing over several decades. Projecting these trends just a few years into the future, we will undeniably see a greater offloading of human decision-making to robots. Many of these decisions are morally salient: for example, decisions about how benefits and burdens are distributed and weighed against one another, about whether an autonomous car brakes or swerves, or about whether to engage an enemy combatant on the battlefield. We suggest that the question of AI consciousness poses a dilemma: if we want robots to abide by either consequentialist or deontological moral theories, then whether or not artificially intelligent agents turn out to be conscious, we will face serious, and perhaps insurmountable, difficulties.