2024, AI & Ethics
https://doi.org/10.1007/s43681-024-00566-8
The meanings of the concept of moral agency as applied to AI technologies differ vastly from those we use for human agents. Minimal definitions of AI moral agency are often connected with other normative agency-related concepts, such as rationality or intelligence, autonomy, or responsibility. This paper discusses the problematic application of minimal concepts of moral agency to AI. I explore why any comprehensive account of AI moral agency has to consider its interconnections with other normative agency-related concepts and beware of four basic detrimental mistakes in the current debate. The results of the analysis are: (1) speaking about AI agency may lead to serious demarcation problems and confused assumptions about the abilities and prospects of AI technologies; (2) talk of AI moral agency rests on confused assumptions and turns out to be senseless in its currently prevalent versions. As one possible solution, I propose replacing the concept of AI agency with the concept of AI automated performance (AIAP).
2021 IEEE Conference on Norbert Wiener in the 21st Century (21CW), 2021
Humans have invented intelligent machinery to enhance their rational decision-making, which is why it has been named 'augmented intelligence'. The use of artificial intelligence (AI) technology is increasing enormously with every passing year, and it is becoming part of our daily life. We use this technology not only as a tool to enhance our rationality but are also elevating it to the status of an autonomous ethical agent for our future society. Norbert Wiener envisaged 'Cybernetics' as a brain-machine interface to augment human beings' biological rationality. Being an autonomous ethical agent presupposes 'agency' in the moral decision-making procedure. According to contemporary theories of agency, AI robots might be entitled to some minimal rational agency. However, that minimal agency might not be adequate for the performance of a fully autonomous ethical agent in the future. If we plan to deploy them as ethical agents for future society, it will be difficult for us to judge where they actually stand as moral agents. It is well known that any kind of moral agency presupposes consciousness and mental representations, which cannot yet be produced synthetically. We can only anticipate that AI scientists will reach this milestone before long, which would help them overcome 'the problem of ethical agency in AI'. Philosophers are currently probing pre-existing ethical theories to build a guidance framework for AI robots and to construct a tangible account of artificial moral agency, although no unanimous solution is available yet. This may end in another conflict between biological moral agency and autonomous ethical agency, leaving us in a baffled state. Creating rational and ethical AI machines will be a fundamental research problem for the AI field. This paper aims to investigate 'the problem of moral agency in AI' from a philosophical standpoint and to survey the relevant philosophical discussions in search of a resolution.
In review, 2019
The field of "machine ethics" has raised the issue of the moral agency and responsibility of artificial entities, like computers and robots under the heading of "artificial moral agents" (AMA). In this article, we work through philosophical assumptions, conceptual and logical variations, and identify methodological as well as conceptual problems based on an analysis of all sides in this debate. A main conclusion is that many of these problems can be better handled by moving the discussion into a more outright normative ethical territory. Rather than locking the issue to be about AMA, a more fruitfull way forward is to address to what extent both machines and human beings should be in different ways included in different (sometimes shared) practices of ascribing moral agency and responsibility.
Minds and Machines, 2020
This paper proposes a methodological redirection of the philosophical debate on artificial moral agency (AMA) in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices that normally assume the moral agency and responsibility of participants. The proposal is backed up by an analysis of the AMA debate, which is found to be overly caught up in the opposition between so-called standard and functionalist conceptions of moral agency, conceptually confused, and practically inert. Additionally, we outline some main themes of research in need of attention in light of the suggested normative approach to AMA.
2020
Philosophical and sociological approaches to technology have increasingly shifted toward describing AI (artificial intelligence) systems as ‘(moral) agents’ and attributing ‘agency’ to them. Only in this way, so their principal argument goes, can the effects of technological components in complex human-computer interaction be sufficiently understood in phenomenological-descriptive and ethical-normative respects. By contrast, this article aims to demonstrate that an explanatory model only achieves a descriptively and normatively satisfactory result if the concepts of ‘(moral) agent’ and ‘(moral) agency’ are related exclusively to human agents. Initially, the division between symbolic and sub-symbolic AI, the black-box character of (deep) machine learning, and the complex network of relationships in the provision and application of machine learning are outlined. Next, the basic ontological and action-theoretical assumptions of an ‘agency’ attribution regarding both ...
Law and Business
AIs’ presence in and influence on human life is growing. AIs are increasingly seen as autonomously acting agents, which creates the challenge of building ethics into their design. This paper defends the thesis that we need to equip AI with an artificial conscience to make it capable of wise judgements. The argument is built in three steps. First, the concept of decision is presented; second, the Asilomar Principles for AI development are analysed. It is then shown that to meet those principles AI needs the capability of passing moral judgements on right and wrong, of following those judgements, and of passing a meta-judgement on the correctness of a given moral judgement, which is the role of conscience. In classical philosophy, the ability to discover right and wrong and to stick to one's judgement of what the right action is in given circumstances is called practical wisdom. The conclusion is that we should equip AI with artificial wisdom. Some problems stemming from ascribing moral age...
AI and Ethics
Who is responsible for the events and consequences caused by using artificially intelligent tools, and is there a gap between what human agents can be responsible for and what is being done using artificial intelligence? Both questions presuppose that the term ‘responsibility’ is a good tool for analysing the moral issues surrounding artificial intelligence. This article will draw this presupposition into doubt and show how reference to responsibility obscures the complexity of moral situations and moral agency, which can be analysed with a more differentiated toolset of moral terminology. It suggests that the impression of responsibility gaps only occurs if we gloss over the complexity of the moral situation in which artificial intelligent tools are employed and if—counterfactually—we ascribe them some kind of pseudo-agential status.
Machine ethics is quickly becoming an important part of artificial intelligence research. We argue that attempts to attribute moral agency to intelligent machines are misguided, whether applied to infrahuman or superhuman AIs. Humanity should not put its future in the hands of machines that do not do exactly what we want them to, since we will not be able to take power back. In general, a machine should never be in a position to make any non-trivial ethical or moral judgments concerning people unless we are confident, preferably with mathematical certainty, that these judgments are what we truly consider ethical.
New Ideas in Psychology, 2019
The paper addresses the question whether artificial intelligences can be moral agents. We begin by observing that philosophical accounts of moral agency, in particular Kantianism and utilitarianism, are very abstract theoretical constructions: no human being can ever be a Kantian or a utilitarian moral agent. Ironically, it is easier for a machine to approximate this idealised type of agency than it is for homo sapiens. We then proceed to outline the structure of human moral practices. Against this background, we identify two conditions of moral agency: internal and external. We argue further that the existing AI architectures are unable to meet the two conditions. In consequence, machines, at least at the current stage of their development, cannot be considered moral agents.
AI and Ethics, 2020
Humans should never relinquish moral agency to machines, and machines should be 'aligned' with human values; but we also need to consider how broad assumptions about our moral capacities and the capabilities of AI affect how we think about AI and ethics. Certain approaches, such as the idea that we might programme our ethics into machines, may rest upon a tacit assumption of our own moral progress. Here I consider how broad assumptions about morality suggest certain approaches to the ethics of AI. Work in the ethics of AI would benefit from closer attention not just to what our moral judgements should be, but also to how we deliberate and act morally: the process of moral decision-making. We must guard against any erosion of our moral agency and responsibilities. Attention to the differences between humans and machines, alongside attention to the ways in which humans fail ethically, could be useful in spotting specific, if limited, ways in which AI could assist us in advancing our moral agency.
The question of whether AI can or should be afforded moral agency or patiency is not one amenable either to discovery or to simple reasoning, because we as societies are constantly constructing our artefacts, including our ethical systems. Consequently, the place of AI in society requires normative, not descriptive, reasoning. Here I review the basis of social and ethical behaviour, then propose a definition of morality that facilitates the consideration of AI moral subjectivity. I argue that we are unlikely to construct a coherent ethics such that it is ethical to afford AI moral subjectivity. We are therefore obliged not to build AI to which we would be obliged.