2020, Minds and Machines
https://doi.org/10.1007/s11023-020-09525-8…
This paper proposes a methodological redirection of the philosophical debate on artificial moral agency (AMA) in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and responsibility of participants. The proposal is backed up by an analysis of the AMA debate, which is found to be overly caught in the opposition between so-called standard and functionalist conceptions of moral agency, conceptually confused and practically inert. Additionally, we outline some main themes of research in need of attention in light of the suggested normative approach to AMA.
In review, 2019
The field of "machine ethics" has raised the issue of the moral agency and responsibility of artificial entities, like computers and robots, under the heading of "artificial moral agents" (AMA). In this article, we work through philosophical assumptions, conceptual and logical variations, and identify methodological as well as conceptual problems based on an analysis of all sides in this debate. A main conclusion is that many of these problems can be better handled by moving the discussion into a more outright normative ethical territory. Rather than locking the issue to be about AMA, a more fruitful way forward is to address to what extent both machines and human beings should, in different ways, be included in different (sometimes shared) practices of ascribing moral agency and responsibility.
AI & Ethics, 2024
The meanings of the concepts of moral agency in application to AI technologies differ vastly from the ones we use for human agents. Minimal definitions of AI moral agency are often connected with other normative agency-related concepts, such as rationality or intelligence, autonomy, or responsibility. This paper discusses the problematic application of minimal concepts of moral agency to AI. I explore why any comprehensive account of AI moral agency has to consider the interconnections to other normative agency-related concepts and guard against four basic detrimental mistakes in the current debate. The results of the analysis are: (1) speaking about AI agency may lead to serious demarcation problems and confusing assumptions about the abilities and prospects of AI technologies; (2) the talk of AI moral agency is based on confusing assumptions and turns out to be senseless in its currently prevalent versions. As one possible solution, I propose to replace the concept of AI agency with the concept of AI automated performance (AIAP).
2021 IEEE Conference on Norbert Wiener in the 21st Century (21CW), 2021
Humans have invented intelligent machinery to enhance their rational decision-making, which is why it has been named 'augmented intelligence'. The use of artificial intelligence (AI) technology is increasing enormously with every passing year, and it is becoming part of our daily lives. We are using this technology not only as a tool to enhance our rationality but are also elevating it to the role of an autonomous ethical agent for our future society. Norbert Wiener envisaged 'Cybernetics' with a view of a brain-machine interface to augment human beings' biological rationality. Being an autonomous ethical agent presupposes 'agency' in the moral decision-making procedure. According to contemporary theories of agency, AI robots might be entitled to some minimal rational agency. However, that minimal agency might not be adequate for the performance of a fully autonomous ethical agent in the future. If we plan to deploy them as ethical agents for the future society, it will be difficult for us to judge their actual standing as moral agents. It is well known that any kind of moral agency presupposes consciousness and mental representations, which cannot be synthesized artificially to date. We can only anticipate that AI scientists will achieve this milestone before long, which will further help them overcome 'the problem of ethical agency in AI'. Philosophers are currently probing pre-existing ethical theories to build a guidance framework for AI robots and to construct a tangible overview of artificial moral agency, although no unanimous solution is yet available. This will land us in another conflict between biological moral agency and autonomous ethical agency, leaving us in a baffled state. Creating rational and ethical AI machines will be a fundamental future research problem for the AI field.
This paper aims to investigate 'the problem of moral agency in AI' from a philosophical standpoint and surveys the relevant philosophical discussions to find a resolution for the same.
… of research on technoethics. New York: IGI …, 2009
This chapter will argue that artificial agents created or synthesized by technologies such as artificial life (ALife), artificial intelligence (AI), and in robotics present unique challenges to the traditional notion of moral agency and that any successful technoethics must seriously consider that these artificial agents may indeed be artificial moral agents (AMA), worthy of moral concern. This purpose will be realized by briefly describing a taxonomy of the artificial agents that these technologies are capable of producing. I will then describe how these artificial entities conflict with our standard notions of moral agency. I argue that traditional notions of moral agency are too strict even in the case of recognizably human agents and then expand the notion of moral agency such that it can sensibly include artificial agents.
AI Ethics, 2020
Humans should never relinquish moral agency to machines, and machines should be 'aligned' with human values; but we also need to consider how broad assumptions about our moral capacities and the capabilities of AI impact how we think about AI and ethics. Consideration of certain approaches, such as the idea that we might programme our ethics into machines, may rest upon a tacit assumption of our own moral progress. Here I consider how broad assumptions about morality suggest certain approaches to addressing the ethics of AI. Work in the ethics of AI would benefit from closer attention not just to what our moral judgements should be, but also to how we deliberate and act morally: the process of moral decision-making. We must guard against any erosion of our moral agency and responsibilities. Attention to the differences between humans and machines, alongside attention to the ways in which humans fail ethically, could be useful in spotting specific, if limited, ways in which AI can assist us in advancing our moral agency.
Ethics and Information Technology, 2018
The aim of this paper is to offer an analysis of the notion of artificial moral agent (AMA) and of its impact on human beings’ self-understanding as moral agents. Firstly, I introduce the topic by presenting what I call the Continuity Approach. Its main claim holds that AMAs and human moral agents exhibit no significant qualitative difference and, therefore, should be considered homogeneous entities. Secondly, I focus on the consequences this approach leads to. In order to do this I take into consideration the work of Bostrom and Dietrich, who have radically assumed this viewpoint and thoroughly explored its implications. Thirdly, I present an alternative approach to AMAs—the Discontinuity Approach—which underscores an essential difference between human moral agents and AMAs by tackling the matter from another angle. In this section I concentrate on the work of Johnson and Bryson and I highlight the link between their claims and Heidegger’s and Jonas’s suggestions concerning the relationship between human beings and technological products. In conclusion I argue that, although the Continuity Approach turns out to be a necessary postulate to the machine ethics project, the Discontinuity Approach highlights a relevant distinction between AMAs and human moral agents. On this account, the Discontinuity Approach generates a clearer understanding of what AMAs are, of how we should face the moral issues they pose, and, finally, of the difference that separates machine ethics from moral philosophy.
Ryan Tonkens (2009) has issued a seemingly impossible challenge, to articulate a comprehensive ethical framework within which artificial moral agents (AMAs) satisfy a Kantian inspired recipe-"rational" and "free"-while also satisfying perceived prerogatives of machine ethicists to facilitate the creation of AMAs that are perfectly and not merely reliably ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach. Beavers pushes for the reinvention of traditional ethics in order to avoid "ethical nihilism" due to the reduction of morality to mechanical causation. Wallach pushes for redoubled efforts toward a comprehensive account of ethics to guide machine ethicists on the issue of artificial moral agency. Options thus present themselves: reinterpret traditional ethics in a way that affords a comprehensive account of moral agency inclusive of both artificial and natural agents, or give up on the possibility and "muddle through" regardless. This series of papers pursues the first option, meets Tonkens' "challenge" and pursues Wallach's ends through Beavers' proposed means, by "landscaping" traditional moral theory in resolution of a comprehensive account of moral agency. This first paper establishes the challenge and sets out the tradition in terms of which an adequate solution should be assessed. The next paper in this series responds to the challenge in Kantian terms, and shows that a Kantian AMA is not only a possibility for Machine ethics research, but a necessary one.
Machine ethics is quickly becoming an important part of artificial intelligence research. We argue that attempts to attribute moral agency to intelligent machines are misguided, whether applied to infrahuman or superhuman AIs. Humanity should not put its future in the hands of machines that do not do exactly what we want them to, since we will not be able to take power back. In general, a machine should never be in a position to make any non-trivial ethical or moral judgments concerning people unless we are confident, preferably with mathematical certainty, that these judgments are what we truly consider ethical.
Springer, 2019
The field of machine ethics is concerned with the question of how to embed ethical behaviors, or a means to determine ethical behaviors, into artificial intelligence (AI) systems. The goal is to produce artificial moral agents (AMAs) that are either implicitly ethical (designed to avoid unethical consequences) or explicitly ethical (designed to behave ethically). Van Wynsberghe and Robbins’ (2018) paper Critiquing the Reasons for Making Artificial Moral Agents critically addresses the reasons offered by machine ethicists for pursuing AMA research; this paper, co-authored by machine ethicists and commentators, aims to contribute to the machine ethics conversation by responding to that critique.
2020
Philosophical and sociological approaches in technology have increasingly shifted toward describing AI (artificial intelligence) systems as ‘(moral) agents,’ while also attributing ‘agency’ to them. It is only in this way – so their principal argument goes – that the effects of technological components in a complex human-computer interaction can be understood sufficiently in phenomenological-descriptive and ethical-normative respects. By contrast, this article aims to demonstrate that an explanatory model only achieves a descriptively and normatively satisfactory result if the concepts of ‘(moral) agent’ and ‘(moral) agency’ are exclusively related to human agents. Initially, the division between symbolic and sub-symbolic AI, the black box character of (deep) machine learning, and the complex relationship network in the provision and application of machine learning are outlined. Next, the ontological and action-theoretical basic assumptions of an ‘agency’ attribution regarding both ...