Studia Ecologiae et Bioethicae, 2024
https://doi.org/10.21697/seb.5814…
The status and impact of rapidly advancing artificial intelligence are widely debated. The article deals with the problem of the moral agency of animals and artificial intelligence. The author addresses several criteria for moral agents and tries to answer the question of whether we can treat animals and AI as moral agents, relying chiefly on philosophical analysis and the comparative method. The author claims that moral agency is not a necessary condition for moral status and doubts the practicality of attributing full moral agency to animals and AI. Moreover, the author claims that moral agency comes in degrees and in different kinds, and that we therefore have to consider the complex nature of moral agency when dealing with moral actions. For instance, even human moral agents are not all at the same level of development, as suggested not just by empirical evidence but also by virtue ethics.
IEEE Conference on Norbert Wiener in the 21st Century (21CW), 2021
Humans have invented intelligent machinery to enhance their rational decision-making, which is why it has been named 'augmented intelligence'. The use of artificial intelligence (AI) technology is increasing enormously with every passing year, and it is becoming a part of our daily life. We use this technology not only as a tool to enhance our rationality but also elevate it to the status of an autonomous ethical agent for our future society. Norbert Wiener envisaged 'Cybernetics' with a view of a brain-machine interface to augment human beings' biological rationality. Being an autonomous ethical agent presupposes 'agency' in the moral decision-making procedure. According to contemporary theories of agency, AI robots might be entitled to some minimal rational agency. However, that minimal agency might not be adequate for the performance of a fully autonomous ethical agent in the future. If we plan to deploy them as ethical agents in future society, it will be difficult for us to judge where they actually stand as moral agents. It is well known that any kind of moral agency presupposes consciousness and mental representations, which cannot yet be produced synthetically. We can only anticipate that AI scientists will reach this milestone before long, which will further help them overcome 'the problem of ethical agency in AI'. Philosophers are currently probing pre-existing ethical theories to build a guidance framework for AI robots and to construct a tangible overview of artificial moral agency, although no unanimous solution is yet available. This will end in another conflict between biological moral agency and autonomous ethical agency, leaving us in a baffled state. Creating rational and ethical AI machines will be a fundamental research problem for the AI field. This paper aims to investigate 'the problem of moral agency in AI' from a philosophical standpoint and to survey the relevant philosophical discussions in search of a resolution.
Beyond AI: Interdisciplinary Aspects of Artificial Intelligence
"The aim of the paper is to argue in favor of the view on moral agency of artificial agents. According to that, the status of actual moral agents cannot be given to artificial agents neither today, nor in the future. The main message of the paper is, that moral agency is something specific only for human agents and the status of actual moral agents cannot be given to any other beings, including artificial ones. Argumentation is supported by few briefly outlined discussions about important topics such as: motivation vs. action; original vs. derivative purposes; in/ability to feel shame, guilt, respect as culturally determined; agents’ mutual recognition; understanding of motivation and behavior of another agent and the like."
2021
This thesis intends to examine and defend the philosophical possibility of the emergence of an authentic artificial moral agent. The plausibility of overcoming the Turing Test, the Chinese Room and the Ada Lovelace Test is taken as a presupposition, as is the possible emergence of an authentic artificial moral agent with intentional deliberations in a first-person perspective. Thus, the thesis that a computational code capable of giving rise to such emergence is possible is accepted. The main problem of this study is to investigate the philosophical possibility of an artificial ethics as a result of the will and rationality of an artificial subject, that is, of artificial intelligence as a moral subject. An artificial ethical agent must act from its own characteristics and not according to predetermined external programming. Authentic artificial ethics is internal, not external, to the automaton. A proposed and increasingly accepted model that demonstrates this computational possibility is that of bottom-up morality, in which the system can independently acquire moral capabilities. This model is close to the Aristotelian ethics of virtues. Another possible way is the union of a bottom-up computational model with models based on deontology and its more general formulation of duties and maxims. On the other hand, it is shown that, in at least one case, it is possible to construct a viable and autonomous model of artificial morality. There is no unequivocal demonstration of the impossibility of artificial moral agents possessing artificial emotions. The conclusion that several programming scientists have reached is that a model of artificial agency founded on machine learning, combined with the ethics of virtue, is a natural, cohesive, coherent, integrated and "seamless" path. Thus, there is a coherent, consistent and well-founded answer indicating that the impossibility of an authentic artificial moral agent has not been proven. Finally, a responsible ethical theory must consider the concrete possibility of the emergence of full moral agents and all the consequences of this watershed phenomenon in human history.
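As a toy illustration of the bottom-up, habituation-style model the thesis describes, the following minimal sketch (a hypothetical construction, not code from the thesis) shows an agent that gradually acquires action dispositions from external moral feedback, loosely echoing virtue acquisition through repeated practice:

```python
import random
from collections import defaultdict

class BottomUpAgent:
    """Toy agent that 'habituates' dispositions from moral feedback.

    Illustrative sketch only: 'virtue' is reduced to a running
    preference score per action, learned from praise/blame signals
    rather than from top-down programmed rules.
    """

    def __init__(self, actions, learning_rate=0.1, exploration=0.2):
        self.actions = actions
        self.dispositions = defaultdict(float)  # action -> habituated preference
        self.lr = learning_rate
        self.eps = exploration

    def choose(self):
        # Mostly act from habituated character, occasionally explore.
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.dispositions[a])

    def receive_feedback(self, action, moral_signal):
        # moral_signal: +1 (praise) or -1 (blame) from the environment.
        # Repeated feedback slowly reshapes dispositions, the way
        # habituation is said to shape virtues.
        self.dispositions[action] += self.lr * (moral_signal - self.dispositions[action])

# Usage: praise honesty, blame deception, and watch the disposition shift.
agent = BottomUpAgent(["tell_truth", "deceive"])
for _ in range(200):
    act = agent.choose()
    agent.receive_feedback(act, +1 if act == "tell_truth" else -1)
print(agent.dispositions)  # 'tell_truth' ends up strongly preferred
```

Whether such reinforcement dynamics amount to genuinely acquiring moral capabilities, rather than merely simulating them, is of course exactly what the thesis puts in question.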
This paper asks whether artificial agents are subject to moral judgements; in other words, whether robots can be assessed as good or bad and whether harming them brings moral and legal consequences. Moral judgement is a useful term which also entails such notions as "rights", "dignity" and personality. Most of the essay deals with possible criticisms of the initial statement, which fall into two distinct categories. The first criticism denies personality in artificial agents. The author points to both the general case of such a denial and to particular cases, giving counter-arguments to each. The second criticism concerns empathy and compassion, which are the foundation of the legal and moral norms regulating communication between individuals. The idea that robots are equal to humans and should not be treated as pets is defended. The main conclusion of the paper is that artificial agents should be subject to moral judgements and should be treated as equal to humans. That is both a moral and a political position.
In review, 2019
The field of "machine ethics" has raised the issue of the moral agency and responsibility of artificial entities, such as computers and robots, under the heading of "artificial moral agents" (AMA). In this article, we work through philosophical assumptions and conceptual and logical variations, and we identify methodological as well as conceptual problems based on an analysis of all sides in this debate. A main conclusion is that many of these problems can be better handled by moving the discussion into more outright normative ethical territory. Rather than locking the issue to being about AMA, a more fruitful way forward is to address the extent to which both machines and human beings should, in different ways, be included in different (sometimes shared) practices of ascribing moral agency and responsibility.
Minds and Machines, 2004
Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most interestingly for us, of AAs). We conclude that there is substantial and important scope, particularly in Computer Ethics, for the concept of moral agent not necessarily exhibiting free will, mental states or responsibility. This complements the more traditional approach, common at least since Montaigne and Descartes, which considers whether or not (artificial) agents have mental states, feelings, emotions and so on. By focussing directly on ‘mind-less morality’ we are able to avoid that question and also many of the concerns of Artificial Intelligence. A vital component in our approach is the ‘Method of Abstraction’ for analysing the level of abstraction (LoA) at which an agent is considered to act. The LoA is determined by the way in which one chooses to describe, analyse and discuss a system and its context. The ‘Method of Abstraction’ is explained in terms of an ‘interface’ or set of features or observables at a given ‘LoA’. Agenthood, and in particular moral agenthood, depends on a LoA. Our guidelines for agenthood are: interactivity (response to stimulus by change of state), autonomy (ability to change state without stimulus) and adaptability (ability to change the ‘transition rules’ by which state is changed) at a given LoA. Morality may be thought of as a ‘threshold’ defined on the observables in the interface determining the LoA under consideration. An agent is morally good if its actions all respect that threshold; and it is morally evil if some action violates it. That view is particularly informative when the agent constitutes a software or digital system, and the observables are numerical. Finally we review the consequences for Computer Ethics of our approach. In conclusion, this approach facilitates the discussion of the morality of agents not only in Cyberspace but also in the biosphere, where animals can be considered moral agents without their having to display free will, emotions or mental states, and in social contexts, where systems like organizations can play the role of moral agents. The primary ‘cost’ of this facility is the extension of the class of agents and moral agents to embrace AAs.
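The threshold view lends itself to a simple computational reading. The sketch below (a hypothetical illustration under assumed names, not code from the paper) models an agent as a state-transition system observed at a fixed LoA, with the three agenthood criteria as methods and morality as a threshold on a numerical observable:

```python
class ObservedAgent:
    """Agent at a fixed level of abstraction (LoA): its interface is a
    dict of numerical observables, updated by transition rules."""

    def __init__(self, state, rules):
        self.state = dict(state)   # observables at this LoA
        self.rules = dict(rules)   # observable -> update function

    def react(self, stimulus):
        # Interactivity: state changes in response to a stimulus.
        for key, delta in stimulus.items():
            self.state[key] = self.state.get(key, 0) + delta

    def step(self):
        # Autonomy: state changes without any external stimulus.
        for key, rule in self.rules.items():
            self.state[key] = rule(self.state[key])

    def adapt(self, key, new_rule):
        # Adaptability: the transition rules themselves can change.
        self.rules[key] = new_rule

def morally_good(history, threshold):
    """Morality as a threshold on observables: the agent is good if every
    recorded state respects the threshold, evil if some state violates it."""
    return all(state["harm"] <= threshold for state in history)

# Usage sketch: run the agent, record its observables, apply the threshold.
agent = ObservedAgent({"harm": 0}, {"harm": lambda h: h + 1})
history = []
for _ in range(5):
    agent.step()
    history.append(dict(agent.state))
print(morally_good(history, threshold=3))  # False: 'harm' exceeds 3 by step 4
```

Note how nothing in this sketch requires free will, mental states or responsibility: good and evil are defined entirely over the observables chosen at the LoA, which is the point of the 'mind-less morality' approach.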
Beyond AI: Artificial Dreams Proceedings
The aim of the paper is to shed light on the position of artificial agents in moral space. The issue becomes more pressing as technology develops faster every day. Commonly, moral inquiry comes into play only when a problem already exists and there is not much to be done about it. In this article I want to point out the importance of moral inquiry conducted in advance as a method of creating a friendly artificial agent and avoiding a psychopathological one. Moral inquiry into artificial agency can also help settle the basis of the legal status of artificial agents. Acknowledging their rights, duties and liabilities would be another result stemming from such inquiry. I will introduce only the most fundamental reasons why an artificial agent (AA) should be subject to moral inquiry.
AI and Ethics
Machine ethics emphasises the importance of collaboration between engineers, philosophers and psychologists in developing artificial-intelligence-endowed systems and other 'smart' machines as artificial moral agents (AMA). Its proponents point out that there are top-down and bottom-up approaches to programming values into artificial autonomous systems. A number of thinkers argue that formalisation of the Kantian categorical imperative is feasible and, hence, that it is possible for smart machines to become Kantian moral agents through the top-down approach of programming the Kantian categorical imperative into AI systems as algorithms. This paper examines some of the arguments put forth by defenders of the possibility of Kantian AMAs, such as Powers, to point out that what these thinkers ignore is that in the Kantian schema a moral agent is a rational being who is capable of 'universalising' the subjective maxims of her actions as law. Can the AMA be rational in this Kantian sense? The paper argues that though Kantian deontology may be an attractive theory for designing AMAs, artificial agents cannot be Kantian moral agents in the real sense of the term.
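To make the top-down picture concrete, here is a deliberately naive sketch (a hypothetical illustration, not Powers's formalisation) of a universalisability test: a maxim is rejected if the practice it relies on would collapse were everyone to adopt it:

```python
from dataclasses import dataclass

@dataclass
class Maxim:
    """A crude stand-in for a Kantian maxim: an action plus the social
    practice the action depends on in order to succeed."""
    action: str
    relies_on_practice: str

def practice_survives_universal_adoption(practice: str, action: str) -> bool:
    # Toy 'world model': hand-coded judgements about whether a practice
    # collapses when the action is universalised. A real system would
    # need exactly the open-ended practical judgement the paper argues
    # machines lack.
    collapses = {
        ("promising", "make_false_promise"),  # universal false promising destroys promising
        ("property", "steal"),                # universal theft undermines property
    }
    return (practice, action) not in collapses

def permissible(maxim: Maxim) -> bool:
    """Naive contradiction-in-conception test: reject maxims whose
    universal adoption would destroy the practice they rely on."""
    return practice_survives_universal_adoption(maxim.relies_on_practice, maxim.action)

print(permissible(Maxim("make_false_promise", "promising")))  # False
print(permissible(Maxim("keep_promise", "promising")))        # True
```

The hand-coded collapse table is where the paper's objection bites: deciding whether a universalised maxim is self-defeating is itself an exercise of rational judgement, not something the algorithm supplies.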
New Ideas in Psychology, 2019
The paper addresses the question whether artificial intelligences can be moral agents. We begin by observing that philosophical accounts of moral agency, in particular Kantianism and utilitarianism, are very abstract theoretical constructions: no human being can ever be a Kantian or a utilitarian moral agent. Ironically, it is easier for a machine to approximate this idealised type of agency than it is for homo sapiens. We then proceed to outline the structure of human moral practices. Against this background, we identify two conditions of moral agency: internal and external. We argue further that the existing AI architectures are unable to meet the two conditions. In consequence, machines, at least at the current stage of their development, cannot be considered moral agents.
Ethics and Information Technology, 2018
The aim of this paper is to offer an analysis of the notion of artificial moral agent (AMA) and of its impact on human beings’ self-understanding as moral agents. Firstly, I introduce the topic by presenting what I call the Continuity Approach. Its main claim holds that AMAs and human moral agents exhibit no significant qualitative difference and, therefore, should be considered homogeneous entities. Secondly, I focus on the consequences this approach leads to. In order to do this I take into consideration the work of Bostrom and Dietrich, who have radically assumed this viewpoint and thoroughly explored its implications. Thirdly, I present an alternative approach to AMAs—the Discontinuity Approach—which underscores an essential difference between human moral agents and AMAs by tackling the matter from another angle. In this section I concentrate on the work of Johnson and Bryson and I highlight the link between their claims and Heidegger’s and Jonas’s suggestions concerning the relationship between human beings and technological products. In conclusion I argue that, although the Continuity Approach turns out to be a necessary postulate to the machine ethics project, the Discontinuity Approach highlights a relevant distinction between AMAs and human moral agents. On this account, the Discontinuity Approach generates a clearer understanding of what AMAs are, of how we should face the moral issues they pose, and, finally, of the difference that separates machine ethics from moral philosophy.
AI & Society , 2022
Springer, 2019
… of research on technoethics. New York: IGI …, 2009
AI Ethics, 2020
Minds and Machines, 2020
faculty.evansville.edu
AI & society, 1992
Discover Artificial Intelligence, 2023
Vilnius University Press, 2021
Humanities and Social Sciences Communications
The Southern Journal of Philosophy, 2022