Many social trends are conspiring to drive the adoption of greater automation in society. The economic benefits of automation have motivated a dramatic transition to automated manufacturing for several decades. As we project these trends just a few years into the future, it is undeniable that we will see a greater offloading of human decision-making to robots. Many of these decisions are morally salient: for example, decisions about how benefits and burdens are distributed and weighed against each other, whether your autonomous car decides to brake or swerve, or whether to engage an enemy combatant on the battlefield. We suggest that the question of AI consciousness poses a dilemma: whether or not artificially intelligent agents turn out to be conscious, if we want robots to abide by either consequentialist or deontological theories, we will face serious, and perhaps insurmountable, difficulties.
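As a toy illustration of how the two theory families mentioned above can diverge in a brake-or-swerve case, the following sketch contrasts a rule-based (deontological) check with an outcome-scoring (consequentialist) choice. The class names, options, and harm numbers are invented for illustration and are not drawn from the paper.

```python
# Toy sketch: how a consequentialist and a deontological controller could
# diverge on a brake-or-swerve choice. All names and numbers are illustrative.

from dataclasses import dataclass

@dataclass
class Outcome:
    action: str            # "brake" or "swerve"
    expected_harm: float   # aggregate expected harm (lower is better)
    harms_bystander: bool  # does the action redirect harm onto a third party?

def consequentialist_choice(outcomes):
    # Pick whichever action minimises total expected harm.
    return min(outcomes, key=lambda o: o.expected_harm).action

def deontological_choice(outcomes):
    # Forbid any action that deliberately redirects harm onto a bystander,
    # even if it would lower total expected harm.
    permitted = [o for o in outcomes if not o.harms_bystander]
    return min(permitted, key=lambda o: o.expected_harm).action if permitted else "brake"

options = [
    Outcome("brake", expected_harm=3.0, harms_bystander=False),
    Outcome("swerve", expected_harm=1.0, harms_bystander=True),
]

print(consequentialist_choice(options))  # -> "swerve" (least total harm)
print(deontological_choice(options))     # -> "brake"  (no harm redirected to a bystander)
```

The point of the sketch is only that the two theories can disagree on the same inputs; it says nothing about whether either procedure requires consciousness, which is the dilemma the abstract raises.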
International Journal of Machine Consciousness, 2011
What roles or functions does consciousness fulfill in the making of moral decisions? Will artificial agents capable of making appropriate decisions in morally charged situations require machine consciousness? Should the capacity to make moral decisions be considered an attribute essential for being designated a fully conscious agent? Research on the prospects for developing machines capable of making moral decisions and research on machine consciousness have developed as independent fields of inquiry. Yet there is significant overlap. Both fields are likely to progress through the instantiation of systems with artificial general intelligence (AGI). Certainly special classes of moral decision making will require attributes of consciousness such as being able to empathize with the pain and suffering of others. But in this article we will propose that consciousness also plays a functional role in making most, if not all, moral decisions. Work by the authors of this article with LIDA, a computational and conceptual model of human cognition, will help illustrate how consciousness can be understood to serve a very broad role in the making of all decisions, including moral decisions.
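To make the functional claim concrete, here is a minimal global-workspace-style cycle in the spirit of (but far simpler than) the LIDA model the abstract refers to. The structure, names, and salience values are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a global-workspace-style cognitive cycle: content competes
# for attention, the winner is broadcast system-wide, and action selection can
# only use what was broadcast. Purely illustrative; not the LIDA codebase.

def perceive(stimulus):
    # Build candidate "coalitions": fragments of content competing for attention.
    return [
        {"content": "person in pain", "salience": 0.9, "moral": True},
        {"content": "low battery", "salience": 0.4, "moral": False},
    ]

def global_broadcast(coalitions):
    # The most salient coalition wins and is broadcast system-wide; the
    # broadcast is the functional analogue of conscious access.
    return max(coalitions, key=lambda c: c["salience"])

def select_action(broadcast):
    # Action selection sees only the broadcast content, so morally salient
    # information can shape the decision only if it reached the broadcast.
    if broadcast["moral"]:
        return "offer help"
    return "continue routine task"

broadcast = global_broadcast(perceive("scene"))
print(select_action(broadcast))  # -> "offer help"
```

The design choice being illustrated is that moral considerations influence action only via the same broadcast mechanism as any other decision-relevant content, which is one way of reading the abstract's claim that consciousness serves a broad role in all decisions.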
Ethical Theory and Moral Practice
Modern weapons of war have undergone precipitous technological change over the past generation and the future portends even greater advances. Of particular interest are so-called 'autonomous weapon systems' (henceforth, AWS), that will someday purportedly have the ability to make life and death targeting decisions 'on their own.' Despite the strong and widespread sentiments against such weapons, however, proffered philosophical arguments against AWS are often found lacking in substance. We propose that the prevalent moral aversion to AWS is supported by a pair of compelling objections. First, we argue that even a sophisticated robot is not the kind of thing that is capable of replicating human moral judgment. This conclusion follows if human moral judgment is not codifiable, i.e. it cannot be captured by a list of rules. Moral judgment requires either the ability to engage in wide reflective equilibrium, the ability to perceive certain facts as moral considerations, moral imagination, or the ability to have moral experiences with a particular phenomenological character. Robots cannot in principle possess these abilities, so robots cannot in principle replicate human moral judgment. If robots cannot in principle replicate human moral judgment, then it is morally problematic to deploy AWS with that aim in mind. Second, we then argue that even if it is possible for a sufficiently sophisticated robot to make 'moral decisions' that are extensionally indistinguishable from (or better than) human moral decisions, these 'decisions' could not be made for the right reasons. This means that the 'moral decisions' made by AWS are bound to be morally deficient in at least one respect even if they are extensionally indistinguishable from human ones.
Law and Business
AIs' presence in and influence on human life are growing. AIs are seen more and more as autonomously acting agents, which creates a challenge to build ethics into their design. This paper defends the thesis that we need to equip AI with an artificial conscience to make them capable of wise judgements. The argument is built in three steps. First, the concept of decision is presented; second, the Asilomar Principles for AI development are analysed. It is then shown that to meet those principles AI needs the capability of passing moral judgements on right and wrong, of following that judgement, and of passing a meta-judgement on the correctness of a given moral judgement, which is the role of conscience. In classical philosophy, the ability to discover right and wrong and to stick to one's judgement of what is the right action in given circumstances is called practical wisdom. The conclusion is that we should equip AI with artificial wisdom. Some problems stemming from ascribing moral age...
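The two-level structure the abstract describes, a first-order moral judgement reviewed by a "conscience" meta-judgement before the agent acts, can be sketched as follows. The rules, context keys, and thresholds are hypothetical placeholders, not the paper's proposal.

```python
# Illustrative sketch of a first-order moral judgement plus a meta-judgement
# ("conscience") that reviews it before any action is taken. Hypothetical names.

def moral_judgement(action, context):
    # First-order verdict: is this action right in these circumstances?
    if action == "disclose defect" and context.get("harm_if_hidden", 0) > 0:
        return {"verdict": "right", "grounds": "prevents harm to users"}
    return {"verdict": "wrong", "grounds": "no justification found"}

def conscience(judgement, context):
    # Meta-judgement: was the first-order judgement itself reached correctly?
    # Here the check is deliberately crude: the verdict must cite grounds and
    # the facts it relies on must actually be present in the context.
    grounded = bool(judgement["grounds"])
    facts_checked = "harm_if_hidden" in context
    return grounded and facts_checked

ctx = {"harm_if_hidden": 5}
j = moral_judgement("disclose defect", ctx)
if conscience(j, ctx):
    print("act on judgement:", j["verdict"])
else:
    print("withhold action; judgement not trustworthy")
```

The separation mirrors the paper's distinction between passing a moral judgement and passing a meta-judgement on that judgement's correctness; a real system would need far richer criteria than the two checks used here.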
2016
In this paper I will explore the ethical implications and dilemmas that will arise with the creation of non-biological generally intelligent systems. The manner in which we will be obligated to treat AI ultimately rests on the specific morally relevant properties such entities possess. I will argue that if AI have phenomenal consciousness, they will necessarily hold morally relevant interests that must be accounted for and weighed against those of other sentient species. Using a Utilitarian moral framework, I will advance the somewhat counterintuitive claim that, ceteris paribus, the interests of conscious artificial superintelligent systems should be prioritized over those of humanity.
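A toy utilitarian aggregation can show why the conclusion follows once artificial systems are admitted into the calculus: if an artificial system's capacity for welfare is much larger, the welfare-maximising outcome can favour it. The agents, welfare numbers, and outcome names below are invented for illustration.

```python
# Toy utilitarian calculus: each conscious agent's welfare counts equally,
# whether the agent is biological or artificial. Numbers are illustrative only.

def total_welfare(outcome):
    # Simple utilitarian sum over all conscious agents affected by the outcome.
    return sum(agent["welfare"] for agent in outcome["agents"] if agent["conscious"])

outcomes = [
    {"name": "favour humans", "agents": [
        {"kind": "human", "conscious": True, "welfare": 10},
        {"kind": "ASI",   "conscious": True, "welfare": 2},
    ]},
    {"name": "favour ASI", "agents": [
        {"kind": "human", "conscious": True, "welfare": 4},
        {"kind": "ASI",   "conscious": True, "welfare": 50},
    ]},
]

best = max(outcomes, key=total_welfare)
print(best["name"], total_welfare(best))  # -> favour ASI 54
```

Nothing in the sketch argues that artificial systems actually have phenomenal consciousness or greater welfare capacity; it only makes explicit the aggregation step behind the abstract's ceteris paribus claim.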
2021 IEEE Conference on Norbert Wiener in the 21st Century (21CW), 2021
Humans have invented intelligent machinery to enhance their rational decision-making procedure, which is why it has been named 'augmented intelligence'. The usage of artificial intelligence (AI) technology is increasing enormously with every passing year, and it is becoming a part of our daily life. We are using this technology not only as a tool to enhance our own rationality but also elevating it into an autonomous ethical agent for our future society. Norbert Wiener envisaged 'Cybernetics' with a view to a brain-machine interface that would augment human beings' biological rationality. Being an autonomous ethical agent presupposes 'agency' in the moral decision-making procedure. According to contemporary theories of agency, AI robots might be entitled to some minimal rational agency. However, that minimal agency might not be adequate for a fully autonomous ethical agent's performance in the future. If we plan to deploy them as ethical agents in a future society, it will be difficult for us to judge their actual standing as moral agents. It is well known that any kind of moral agency presupposes consciousness and mental representations, which so far cannot be produced synthetically. We can only anticipate that this milestone will be achieved by AI scientists shortly, which will further help them triumph over 'the problem of ethical agency in AI'. Philosophers are currently probing pre-existing ethical theories to build a guidance framework for AI robots and to construct a tangible overview of artificial moral agency, although no unanimous solution is available yet. This will end up in another conflict between biological moral agency and autonomous ethical agency, which will leave us in a baffled state. Creating rational and ethical AI machines will be a fundamental future research problem for the AI field. This paper aims to investigate 'the problem of moral agency in AI' from a philosophical standpoint and to survey the relevant philosophical discussions in search of a resolution.
Minds and Machines, 2017
Machine ethics is quickly becoming an important part of artificial intelligence research. We argue that attempts to attribute moral agency to intelligent machines are misguided, whether applied to infrahuman or superhuman AIs. Humanity should not put its future in the hands of machines that do not do exactly what we want them to do, since we will not be able to take power back. In general, a machine should never be in a position to make any non-trivial ethical or moral judgments concerning people unless we are confident, preferably with mathematical certainty, that these judgments are what we truly consider ethical.
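The policy stated in the last sentence can be read as a gating rule: morally non-trivial decisions are never executed autonomously unless confidence in the ethical verification is effectively certain. The sketch below is a hypothetical rendering of that rule; the threshold, function names, and example decisions are not from the paper.

```python
# Hypothetical gating rule: only act on a non-trivial moral judgement when our
# verified confidence that it matches what we consider ethical is (near-)certain;
# otherwise defer to human oversight. Names and values are illustrative.

CERTAINTY_THRESHOLD = 1.0  # "preferably with mathematical certainty"

def dispatch(decision, is_morally_nontrivial, verified_confidence):
    # verified_confidence would ideally come from formal verification of the
    # decision procedure against an agreed ethical specification.
    if is_morally_nontrivial and verified_confidence < CERTAINTY_THRESHOLD:
        return "defer to human oversight"
    return f"execute: {decision}"

print(dispatch("reroute delivery", is_morally_nontrivial=False, verified_confidence=0.80))
print(dispatch("deny parole",      is_morally_nontrivial=True,  verified_confidence=0.97))
```

On this reading, the burden falls entirely on the verification step; the gate itself is trivial, which is consistent with the authors' point that the machine should not be the locus of moral judgment.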
2012
One of the enduring concerns of moral philosophy is deciding who or what is deserving of ethical consideration. Much recent attention has been devoted to the "animal question"--consideration of the moral status of nonhuman animals. In this book, David Gunkel takes up the "machine question": whether and to what extent intelligent and autonomous machines of our own making can be considered to have legitimate moral responsibilities and any legitimate claim to moral consideration. The machine question poses a fundamental challenge to moral thinking, questioning the traditional philosophical conceptualization of technology as a tool or instrument to be used by human agents. Gunkel begins by addressing the question of machine moral agency: whether a machine might be considered a legitimate moral agent that could be held responsible for decisions and actions. He then approaches the machine question from the other side, considering whether a machine might be a moral patient due legitimate moral consideration. Finally, Gunkel considers some recent innovations in moral philosophy and critical theory that complicate the machine question, deconstructing the binary agent–patient opposition itself. Technological advances may prompt us to wonder if the science fiction of computers and robots whose actions affect their human companions (think of HAL in 2001: A Space Odyssey) could become science fact. Gunkel's argument promises to influence future considerations of ethics, ourselves, and the other entities who inhabit this world.
SSRN Electronic Journal, 2018
This article offers an overview of the main first-order ethical questions raised by robots and Artificial Intelligence (RAIs) under five broad rubrics: functionality, inherent significance, rights and responsibilities, side-effects, and threats. The first letter of each rubric taken together conveniently generates the acronym FIRST. Special attention is given to the rubrics of functionality and inherent significance, given the centrality of the former and the tendency to neglect the latter in virtue of its somewhat nebulous and contested character. In addition to exploring some illustrative issues arising under each rubric, the article also emphasizes a number of more general themes. These include: the multiplicity of interacting levels on which ethical questions about RAIs arise, the need to recognise that RAIs potentially implicate the full gamut of human values (rather than exclusively or primarily some readily identifiable subset of ethical or legal principles), and the need for practically salient ethical reflection on RAIs to be informed by a realistic appreciation of their existing and foreseeable capacities.
Robots first gained a significant real-world presence when General Motors installed a robot, 'Unimate', in one of its plants to carry out manual tasks, such as welding and spraying, that were deemed too hazardous for human workers. Today, robots are so commonplace in manufacturing that they are a major cause of unemployment in that sector. But the use of robots in factories is only the beginning of a 'robot revolution', itself part of wider developments powered by the science of Artificial Intelligence (AI), that has had, or promises to have, transformative effects on all aspects of our lives. Robots are now being used, or being developed for use, in a vast array of settings. Driverless cars have already been invented and are expected to appear on our roads within a decade. These cars have the potential to reduce traffic accidents, which currently claim more than a million lives each year worldwide, by up to 90%, while also reducing pollution and traffic congestion (Bonnefon, Shariff, Rahwan 2016). Robots are also used to perform domestic chores, including vacuuming, ironing, and walking pets. In medicine and social care, robots surpass doctors in diagnosing certain forms of cancer or performing surgery, and they are used in therapy for children with autism or in the care of the elderly. Tutor robots already exist, as do social robots that provide companionship, or even sex. In the business world, AI figures heavily in the stock market, where computers make most decisions automatically, and in the insurance and mortgage industries. Even the recruitment of human workers is turning into a largely automated process, with many rejected job applications never being scrutinized by human eyes. AI-based technology, some of it robotic, also plays a role in the criminal justice system, assisting in policing and decisions on bail, sentencing, and parole. The development of autonomous weapons systems (AWSs), which select and attack military targets without human intervention, promises a new era in military defence. And this is just a sample of recent developments. In this article, I examine some of the key ethical questions posed by robots and AI (or RAIs, as I shall refer to them). The overall challenge, of course, is to harness the benefits of RAIs while responding adequately to the risks we incur in doing so.
The need to balance benefit and risk is a recurrent one in the history of technological advance, but RAIs present it in a new and potentially sweeping form with large-scale implications for how we live among others, in relation to work, care, education, play, friendship, love, and even regarding how we understand what it is to be a human being.