2016
We study whether robots can satisfy the conditions for agents fit to be held responsible in a normative sense, with a focus on autonomy and self-control. An analogy between robots and human groups enables us to adapt arguments concerning collective responsibility to questions of robot responsibility. On the basis of Alfred R. Mele's history-sensitive account of autonomy and responsibility, it can be argued that even if robots were to have all the capacities usually required of moral agency, their history as products of engineering would undermine their autonomy and thus their responsibility.
Frontiers in Robotics and AI
It is almost a foregone conclusion that robots cannot be morally responsible agents, both because they lack traditional features of moral agency like consciousness, intentionality, or empathy and because of the apparent senselessness of holding them accountable. Moreover, although some theorists include them in the moral community as moral patients, on the Strawsonian picture of moral community as requiring moral responsibility, robots are typically excluded from membership. By looking closely at our actual moral responsibility practices, however, I determine that the agency reflected and cultivated by them is limited to the kind of moral agency of which some robots are capable, not the philosophically demanding sort behind the traditional view. Hence, moral rule-abiding robots (if feasible) can be sufficiently morally responsible and thus moral community members, despite certain deficits. Alternative accountability structures could address these deficits, which I argue ought to be ...
… of the 2008 conference on Tenth …, 2008
Roboethics is a recently developed field of applied ethics that deals with the ethical aspects of technologies such as robots, ambient intelligence, direct neural interfaces, invasive nano-devices, and intelligent soft bots. In this article we look specifically at the issue of (moral) responsibility in artificial intelligent systems. We argue for a pragmatic approach, where responsibility is seen as a social regulatory mechanism. We claim that having a system which takes care of certain tasks intelligently, learning from experience and making autonomous decisions, gives us reason to talk about the system (an artifact) as being "responsible" for a task. Technology is undoubtedly morally significant for humans, so "responsibility for a task" with moral consequences can be seen as moral responsibility. Intelligent systems can be seen as parts of socio-technological systems with distributed responsibilities, where responsible (moral) agency is a matter of degree. Since all possible abnormal operating conditions can never be predicted, and no system can ever be tested for all possible situations of its use, the responsibility of a producer is to assure the proper functioning of a system under reasonably foreseeable circumstances. Additional safety measures must, however, be in place to mitigate the consequences of an accident. The socio-technological system aimed at assuring a beneficial deployment of intelligent systems has several functional responsibility feedback loops that must work properly: awareness of, and procedures for handling, risks and responsibilities on the part of designers, producers, implementers, and maintenance personnel, as well as an understanding in society at large of the values and dangers of intelligent technology. The basic precondition for developing this socio-technological control system is the education of engineers in ethics and a living democratic debate about preferences for the future society.
Connection Science
Can robots be moral agents? And why should we care? Principle: Humans, not robots, are responsible agents. Robots should be designed and operated, as far as is practicable, to comply with existing laws and fundamental rights and freedoms, including privacy. This principle highlights the need for humans to accept responsibility for robot behaviour, and in that respect it is commendable. However, it raises further questions about legal and moral responsibility. The issues considered here are (i) the reasons for assuming that humans, and not robots, are responsible agents, (ii) whether it is sufficient to design robots to comply with existing laws and human rights, and (iii) the implications, for robot deployment, of the assumption that robots are not morally responsible.
Many ethicists writing about automated systems (e.g. self-driving cars and autonomous weapons systems) attribute agency to these systems. Not only that, they seemingly attribute an autonomous or independent form of agency to these machines. This leads some ethicists to worry about responsibility-gaps and retribution-gaps in cases where automated systems harm or kill human beings. In this paper, I consider what sorts of agency it makes sense to attribute to most current forms of automated systems, in particular automated cars and military robots. I argue that whereas it indeed makes sense to attribute different forms of fairly sophisticated agency to these machines, we ought not to regard them as acting on their own, independently of any human beings. Rather, the right way to understand the agency exercised by these machines is in terms of human-robot collaborations, where the humans involved initiate, supervise, and manage the agency of their robotic collaborators. This means, I argue, that there is much less room for justified worries about responsibility-gaps and retribution-gaps than many ethicists think.
Frontiers in Computer Science, 2022
There is much discussion about super artificial intelligence (AI) and autonomous machine learning (ML) systems, or learning machines (LM). Yet the reality of thinking robotics still seems far on the horizon. It is one thing to define AI in light of human intelligence, citing the remoteness between ML and human intelligence, but another to understand issues of ethics, responsibility, and accountability in relation to the behavior of autonomous robotic systems within a human society. Due to the apparent gap between a society in which autonomous robots are a reality and present-day reality, many of the efforts placed on establishing robotic governance, and indeed robot law, fall outside the fields of valid scientific research. Work within this area has concentrated on manifestos, special interest groups, and popular culture. This article takes a cognitive scientific perspective toward characterizing the nature of what true LMs would entail—i.e., intentionality and consciousness. It the...
idt.mdh.se
Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy, and intelligence of the machine. Moral concerns for agents such as intelligent search machines are relatively simple, while highly intelligent and autonomous artifacts with significant impact and complex modes of agency must be equipped with more advanced ethical capabilities. Systems like cognitive robots are being developed that are expected to become part of our everyday lives in the coming decades. Thus, it is necessary to ensure that their behaviour is adequate. By analogy with artificial intelligence, which is the ability of a machine to perform activities that would require intelligence in humans, artificial morality is considered to be the ability of a machine to perform activities that would require morality in humans. The capacities for artificial (artifactual) morality, such as artifactual agency, artifactual responsibility, artificial intentions, artificial (synthetic) emotions, etc., come in varying degrees and depend on the type of agent. As an illustration, we address the assurance of safety in modern High Reliability Organizations through responsibility distribution. In the same way that the concept of agency is generalized in the case of artificial agents, the concept of moral agency, including responsibility, is generalized too. We propose to look at artificial moral agents as having functional responsibilities within a network of distributed responsibilities in a socio-technological system. This does not take away the responsibilities of the other stakeholders in the system, but facilitates an understanding and regulation of such networks. It should be pointed out that the process of development must assume an evolutionary form with a number of iterations, because the emergent properties of artifacts must be tested in real-world situations with agents of increasing intelligence and moral competence. We see this paper as a contribution to macro-level Requirements Engineering through the discussion and analysis of general requirements for the design of ethical robots.
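As a purely illustrative sketch (not drawn from the paper), the idea of functional responsibilities distributed across a socio-technological network, with agency coming in degrees, could be modelled roughly as below; the class names, the 0–1 agency scale, and the example stakeholders are all hypothetical assumptions.

```python
# Hypothetical sketch: a network of distributed functional responsibilities.
# Names and the agency scale are illustrative assumptions, not taken from the paper.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Stakeholder:
    name: str
    agency_level: float  # rough degree of agency: 0.0 (none) .. 1.0 (full human moral agency)
    responsibilities: List[str] = field(default_factory=list)

@dataclass
class ResponsibilityNetwork:
    nodes: Dict[str, Stakeholder] = field(default_factory=dict)

    def add(self, stakeholder: Stakeholder) -> None:
        self.nodes[stakeholder.name] = stakeholder

    def assign(self, name: str, task: str) -> None:
        # Assigning a task to one stakeholder does not remove it from the others.
        self.nodes[name].responsibilities.append(task)

    def accountable_for(self, task: str) -> List[Stakeholder]:
        # All stakeholders who share functional responsibility for a given task.
        return [s for s in self.nodes.values() if task in s.responsibilities]

net = ResponsibilityNetwork()
net.add(Stakeholder("designer", agency_level=1.0))
net.add(Stakeholder("operator", agency_level=1.0))
net.add(Stakeholder("care_robot", agency_level=0.3))
for who in ("designer", "operator", "care_robot"):
    net.assign(who, "patient_safety")
print([s.name for s in net.accountable_for("patient_safety")])
```

The only point of the sketch is that giving the robot node a functional responsibility does not remove that responsibility from the human nodes, mirroring the abstract's claim that distributed responsibility does not take away the responsibilities of other stakeholders.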
Ethics and Information Technology, 2017
The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly interactive, autonomous, and sociable mechanisms. The analysis proceeds through three steps or movements. 1) It begins by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. 2) It then considers three instances where recent innovations in robotics challenge this standard operating procedure by opening gaps in the usual way of assigning responsibility. The innovations considered in this section include: autonomous technology, machine learning, and social robots. 3) The essay concludes by evaluating the three different responses—instrumentalism 2.0, machine ethics, and hybrid responsibility—that have been made in face of these difficulties in an effort to map out the opportunities and challenges of and for responsible robotics.
Journal of Business Ethics, 2022
Business, management, and business ethics literature pays little attention to the topic of AI robots. The broad spectrum of potential ethical issues spans the use of driverless cars, AI robots in care homes, and military applications such as Lethal Autonomous Weapon Systems. However, there is a scarcity of in-depth theoretical, methodological, or empirical studies that address these ethical issues, for instance, the impact of morality and where accountability resides in AI robots’ use. To address this dearth, this study offers a conceptual framework that interpretively develops the ethical implications of AI robot applications, drawing on descriptive and normative ethical theory. The new framework elaborates on how the locus of morality (human to AI agency) and moral intensity combine within context-specific AI robot applications, and how this might influence accountability thinking. Our theorization indicates that in situations of escalating AI agency and situational moral intensity, ac...
Robots and Moral Obligations. In: What Social Robots Can and Should Do: Proceedings of Robophilosophy 2016/TRANSOR 2016, 290, 2016
Using Roger Crisp's [1] arguments for well-being as the ultimate source of moral reasoning, this paper argues that there are no ultimate, non-derivative reasons to program robots with moral concepts such as moral obligation, morally wrong, or morally right. Although these moral concepts should not be used to program robots, they are not to be abandoned by humans, since there are still reasons to keep using them, namely: as an assessment of the agent, to take a stand, or to motivate and reinforce behaviour. Because robots are completely rational agents, they do not need these additional motivations; a concept of what promotes well-being suffices. How a robot knows which action promotes well-being to the greatest degree is still up for debate, but a combination of top-down and bottom-up approaches seems to be the best way.
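As a hedged, minimal sketch of what that closing suggestion might look like in practice (the paper itself gives no code), a hybrid evaluator could combine a hand-authored top-down rule score with a bottom-up learned estimate of well-being; every function name, feature flag, and weight below is a hypothetical assumption.

```python
# Hypothetical sketch of a hybrid top-down / bottom-up well-being evaluator.
# Nothing here comes from the paper; the features and weighting are illustrative only.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Action:
    name: str
    features: Dict[str, bool] = field(default_factory=dict)  # observable features of a candidate action

def rule_score(action: Action) -> float:
    # Top-down component: explicit, hand-authored considerations about well-being.
    score = 0.0
    if action.features.get("causes_physical_harm", False):
        score -= 1.0
    if action.features.get("fulfils_request", False):
        score += 0.5
    return score

def choose_action(actions: List[Action],
                  learned_score: Callable[[Dict[str, bool]], float],
                  top_down_weight: float = 0.6) -> Action:
    # Pick the action with the highest weighted combination of both components.
    def combined(a: Action) -> float:
        return (top_down_weight * rule_score(a)
                + (1.0 - top_down_weight) * learned_score(a.features))
    return max(actions, key=combined)

# Example usage with a trivial stand-in for the bottom-up (learned) model.
dummy_model = lambda feats: 0.2 if feats.get("fulfils_request") else 0.0
options = [
    Action("comply", {"fulfils_request": True}),
    Action("refuse", {"causes_physical_harm": False}),
]
print(choose_action(options, dummy_model).name)  # -> "comply"
```

The fixed weight is only a placeholder for the open question the abstract raises: how much of the well-being estimate should come from explicit rules and how much from learning remains up for debate.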
Ethics and Information Technology, 2010
The Southern Journal of Philosophy, 2022
Science and Engineering Ethics, 2019
Information
Ethics in Progress, 2019
IEEE Technology and Society Magazine
In review, 2019
Royakkers, L., & Olsthoorn, P. (2014). Military Robots and the Question of Responsibility. International Journal of Technoethics (IJT), 5(1), 1-14.
Robot and Human …, 2008
International Journal of Social Robotics, 2019
Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, 2021