Springer, 2019
The field of machine ethics is concerned with the question of how to embed ethical behaviors, or a means to determine ethical behaviors, into artificial intelligence (AI) systems. The goal is to produce artificial moral agents (AMAs) that are either implicitly ethical (designed to avoid unethical consequences) or explicitly ethical (designed to behave ethically). Van Wynsberghe and Robbins’ (2018) paper Critiquing the Reasons for Making Artificial Moral Agents critically addresses the reasons offered by machine ethicists for pursuing AMA research; this paper, co-authored by machine ethicists and commentators, aims to contribute to the machine ethics conversation by responding to that critique.
Science and Engineering Ethics
Many industry leaders and academics in the field of machine ethics would have us believe that the inevitability of robots taking on a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents (AMAs). The reasons most often given for developing AMAs are: the prevention of harm, the necessity of public trust, the prevention of immoral use, the claim that such machines would be better moral reasoners than humans, and the expectation that building them would deepen our understanding of human morality. Although some scholars have challenged the very initiative to develop AMAs, what is currently missing from the debate is a closer examination of the reasons machine ethicists offer to justify the development of AMAs. This closer examination is especially needed given the amount of funding currently allocated to AMA development (from funders like Elon Musk), coupled with the media attention researchers and industry leaders receive for their efforts in this direction. The stakes in this debate are high because moral robots would make demands on society and would force answers to a host of pending questions, such as what counts as an AMA and whether AMAs are morally responsible for their behavior. This paper shifts the burden of proof back to the machine ethicists, demanding that they give good reasons to build AMAs. It argues that until they do, the development of commercially available AMAs should not proceed further.
Kritike, 2018
This paper focuses on the research field of machine ethics and how it relates to a technological singularity, a hypothesized future event in which artificial machines attain greater-than-human-level intelligence. One problem related to the singularity is whether human values and norms would survive such an event. To help ensure that they do, a number of artificial intelligence researchers have focused on developing artificial moral agents: machines capable of moral reasoning, judgment, and decision-making. To date, different frameworks for arriving at such agents have been put forward, but there is no firm consensus on which framework is likely to yield a positive result. Given the body of work they have contributed to the study of moral agency, philosophers are well placed to add to the growing literature on artificial moral agency; in doing so, they can also consider how the concept bears on other important philosophical concepts.
2018
The increasing pervasiveness, autonomy, and complexity of artificially intelligent technologies in human society have challenged the traditional conception of moral responsibility. In response, it has been proposed that the existing notion of moral responsibility be expanded to account for the morality of technologies. Machine ethics is the field dedicated to studying the computational entity as a moral entity; its goal is to develop technologies capable of autonomous moral reasoning, namely artificial moral agents. This thesis begins by surveying the basic assumptions and definitions underlying this conception of artificial moral agency. It then investigates why (and how) society would benefit from the development of such agents. Finally, it explores the main approaches to developing artificial moral agents. In effect, this research serves as a critique of the emerging field of machine ethics.
AI Ethics, 2020
Humans should never relinquish moral agency to machines, and machines should be 'aligned' with human values; but we also need to consider how broad assumptions about our moral capacities, and about the capabilities of AI, shape how we think about AI and ethics. Certain approaches, such as the idea that we might programme our ethics into machines, may rest on a tacit assumption of our own moral progress. Here I consider how broad assumptions about morality suggest certain approaches to the ethics of AI. Work in the ethics of AI would benefit from closer attention not just to what our moral judgements should be, but also to how we deliberate and act morally: the process of moral decision-making. We must guard against any erosion of our moral agency and responsibilities. Attention to the differences between humans and machines, alongside attention to the ways in which humans fail ethically, could be useful in spotting specific, if limited, ways that AI might assist us in advancing our moral agency.
Machine ethics is quickly becoming an important part of artificial intelligence research. We argue that attempts to attribute moral agency to intelligent machines are misguided, whether applied to infrahuman or superhuman AIs. Humanity should not put its future in the hands of machines that do not do exactly what we want them to, since we would not be able to take power back. In general, a machine should never be in a position to make any non-trivial ethical or moral judgments concerning people unless we are confident, preferably with mathematical certainty, that these judgments are what we truly consider ethical.
Conatus
At the turn of the 21st century, Susan Leigh Anderson and Michael Anderson conceived and introduced the Machine Ethics research program, which aimed to identify the requirements under which autonomous artificial intelligence (AI) systems could demonstrate ethical behavior guided by moral values, and at the same time to show that these values, like ethics in general, can be represented and computed. Today, interaction between humans and AI entities is already part of our everyday lives; in the near future it is expected to play a key role in scientific research, medical practice, public administration, education, and other fields of civic life. In view of this, the debate over the ethical behavior of machines is more crucial than ever, and the search for answers, directions, and regulations is imperative at the academic and institutional as well as the technical level. Our discussion with the two inspirers and originators of Machine Ethics highlights the epistemological, meta...
2021
This thesis sets out to examine, and to assume, the philosophical possibility of the emergence of an authentic artificial moral agent. It takes as a presupposition the plausibility of passing the Turing Test, the Chinese Room, and the Ada Lovelace Test, as well as the possible emergence of an authentic artificial moral agent with intentional, first-person deliberations. Thus, the thesis accepts the possibility of a computational code capable of giving rise to such emergence. The main problem of this study is to investigate the philosophical possibility of an artificial ethics arising from the will and rationality of an artificial subject, that is, of artificial intelligence as a moral subject. An artificial ethical agent must act from its own character and not according to predetermined external programming; authentic artificial ethics is internal, not external, to the automaton. A proposed and increasingly accepted model that demonstrates this computational possibility is bottom-up morality, in which the system acquires moral capabilities independently. This model is close to the Aristotelian ethics of virtue. Another possible route is to combine such a bottom-up computational model with top-down models based on deontology, with their more general formulation of duties and maxims. It is shown that, in at least one case, a viable and autonomous model of artificial morality can be constructed, and there is no unequivocal demonstration that artificial moral agents cannot possess artificial emotions. The conclusion several programming scientists have reached is that an artificial agency model founded on machine learning, combined with the ethics of virtue, is a natural, cohesive, coherent, integrated, and “seamless” path. There is thus a coherent, consistent, and well-founded case that the impossibility of an authentic artificial moral agent has not been proven. Finally, a responsible ethical theory must consider the concrete possibility of the emergence of full moral agents and all the consequences of this watershed phenomenon in human history.
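To make the bottom-up model concrete, here is a minimal sketch, assuming a supervised-learning setting in which the system induces its moral distinctions from labeled past cases rather than from a hand-written rule set. The feature encoding, the labels, and the use of scikit-learn are illustrative assumptions of this sketch, not the thesis's own implementation.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical feature encoding of candidate actions:
# [harm_caused, consent_obtained, benefit_to_others], each scaled 0..1.
past_actions = [
    [0.0, 1.0, 0.8],  # helped someone, with consent, no harm
    [0.9, 0.0, 0.1],  # caused serious harm without consent
    [0.2, 1.0, 0.9],  # minor harm, consent, large benefit
    [0.7, 0.0, 0.6],  # notable harm, no consent, some benefit
]
judgments = ["permissible", "impermissible", "permissible", "impermissible"]

# Bottom-up: the system induces its moral distinctions from cases,
# rather than applying a hand-written, top-down rule set.
model = DecisionTreeClassifier().fit(past_actions, judgments)

new_action = [[0.1, 1.0, 0.5]]
print(model.predict(new_action))  # e.g. -> ['permissible']
```

The analogy to Aristotelian habituation is that the agent's dispositions are shaped by the cases it has "lived through" rather than by explicitly programmed maxims; the hybrid route mentioned above would then constrain such learned dispositions with deontological duties and maxims.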
Humanities and Social Sciences Communications
The moral implications of algorithm-based decision-making require special attention within the field of machine ethics. Specifically, the research focuses on clarifying why, even if one assumes the existence of ethical intelligent agents that work well in epistemic terms, it does not follow that they meet the requirements of autonomous moral agents such as human beings. To exemplify some of the difficulties in arguing for implicit and explicit ethical agents in Moor’s sense, three first-order normative theories in the field of machine ethics are put to the test: Powers’ prospect for a Kantian machine, Anderson and Anderson’s reinterpretation of act utilitarianism, and Howard and Muntean’s prospect for a moral machine based on a virtue-ethical approach. By comparing and contrasting the three first-order normative theories, and by clarifying the gist of the differences between the processes of calculation and moral estimation, the possibility f...
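The difference between calculation and moral estimation can be made concrete with the act-utilitarian case: once intensities, durations, and probabilities of pleasure and displeasure have been assigned, selecting the best action is pure arithmetic. The sketch below is a hypothetical illustration in the spirit of a hedonistic act-utilitarian starting point like Anderson and Anderson's, not their actual system; every name and number in it is assumed.

```python
from dataclasses import dataclass

@dataclass
class Effect:
    """Projected effect of an action on one affected person."""
    person: str
    intensity: float    # pleasure (+) or displeasure (-), e.g. -1.0..1.0
    duration: float     # how long the effect lasts (arbitrary units)
    probability: float  # likelihood the effect occurs, 0.0..1.0

def net_utility(effects: list[Effect]) -> float:
    """Act-utilitarian score: total expected pleasure over everyone affected."""
    return sum(e.intensity * e.duration * e.probability for e in effects)

def choose_action(options: dict[str, list[Effect]]) -> str:
    """Pick the action with the greatest net expected pleasure."""
    return max(options, key=lambda name: net_utility(options[name]))

# Hypothetical case: remind a patient to take medication, or respect a refusal.
options = {
    "remind_again": [Effect("patient", -0.2, 1.0, 1.0),   # annoyance
                     Effect("patient", 0.8, 5.0, 0.6)],   # health benefit if heeded
    "drop_issue":   [Effect("patient", 0.1, 1.0, 1.0),    # relief
                     Effect("patient", -0.9, 5.0, 0.4)],  # risk of harm
}
print(choose_action(options))  # -> "remind_again"
```

Everything morally contentious has been pushed into the numeric inputs; assigning those numbers is the moral estimation that the comparison distinguishes from mere calculation.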
2021 IEEE Conference on Norbert Wiener in the 21st Century (21CW), 2021
Humans have invented intelligent machinery to enhance their rational decision-making, which is why it has been called 'augmented intelligence'. The use of artificial intelligence (AI) technology is increasing enormously with every passing year, and it is becoming part of our daily lives. We use this technology not only as a tool to enhance our rationality but also to elevate it to the role of autonomous ethical agent for our future society. Norbert Wiener envisaged 'Cybernetics' with a view to a brain-machine interface that would augment human beings' biological rationality. Being an autonomous ethical agent presupposes 'agency' in moral decision-making. According to contemporary theories of agency, AI robots might be entitled to some minimal rational agency, but that minimal agency may not be adequate for the performance of a fully autonomous ethical agent in the future. If we plan to deploy them as ethical agents for the future society, it will be difficult for us to judge where they actually stand as moral agents. It is well known that any kind of moral agency presupposes consciousness and mental representations, which cannot yet be produced synthetically. We can only anticipate that AI scientists will reach this milestone before long, which would help them overcome 'the problem of ethical agency in AI'. Philosophers are currently probing pre-existing ethical theories to build a guidance framework for AI robots and to construct a tangible overview of artificial moral agency, though no unanimous solution is yet available. The issue ends in a further conflict between biological moral agency and autonomous ethical agency, which leaves us in a baffled state. Creating rational and ethical AI machines will be a fundamental research problem for the AI field. This paper aims to investigate 'the problem of moral agency in AI' from a philosophical outset and to survey the relevant philosophical discussions in search of a resolution.
AI & Society, 2020
New Ideas in Psychology, 2019
Advances in Robotics & Mechanical Engineering, 2019
AI & SOCIETY, 2017
Belgian/Netherlands Artificial Intelligence Conference, 2012
Ethics and Information Technology, 2018
Minds and Machines, 2017
In review, 2019
Frontiers in Psychology, 2023
Handbook on the Ethics of Artificial Intelligence (Eds. David Gunkel), 97-112 , 2024
AI & SOCIETY, 2013
Machine Law, Ethics, and Morality in the Age of Artificial Intelligence, 2021