2021, Ethics and Information Technology
In this paper I propose a Fictional Dualism model of social robots. The model helps us to understand the human emotional reaction to social robots and guides us in determining the significance of that reaction, enabling us to better define the moral and legislative rights of social robots within our society. I propose a distinctive position that allows us to accept that robots are tools, that our emotional reaction to them can be important to their usefulness, and that this emotional reaction is not a direct indicator that robots deserve either moral consideration or rights. The positive framework of Fictional Dualism provides us with an understanding of what social robots are and with a plausible basis for our relationships with them as we bring them further into society.
Ethics of Socially Disruptive Technologies: An Introduction, 2023
Advancements in artificial intelligence and (social) robotics raise pertinent questions as to how these technologies may help shape the society of the future. The main aim of the chapter is to consider the social and conceptual disruptions that might be associated with social robots, and humanoid social robots in particular. This chapter starts by comparing the concepts of robots and artificial intelligence and briefly explores the origins of these expressions. It then explains the definition of a social robot, as well as the definition of humanoid robots. A key notion in this context is the idea of anthropomorphism: the human tendency to attribute human qualities, not only to our fellow human beings, but also to parts of nature and to technologies. This tendency to anthropomorphize technologies by responding to and interacting with them as if they have human qualities is one of the reasons why social robots (in particular social robots designed to look and behave like human beings) can be socially disruptive. As is explained in the chapter, while some ethics researchers believe that anthropomorphization is a mistake that can lead to various forms of deception, others — including both ethics researchers and social roboticists — believe it can be useful or fitting to treat robots in anthropomorphizing ways. The chapter explores that disagreement by, among other things, considering recent philosophical debates about whether social robots can be moral patients, that is, whether it can make sense to treat them with moral consideration. Where one stands on this issue will depend either on one’s views about whether social robots can have, imitate, or represent morally relevant properties, or on how people relate to social robots in their interactions with them. Lastly, the chapter urges that the ethics of social robots should explore intercultural perspectives, and highlights some recent research on Ubuntu ethics and social robots.
Information
A controversial question that has been hotly debated in the emerging field of robot ethics is whether robots should be granted rights. Yet, a review of the recent literature in that field suggests that this seemingly straightforward question is far from clear and unambiguous. For example, those who favor granting rights to robots have not always been clear as to which kinds of robots should (or should not) be eligible; nor have they been consistent with regard to which kinds of rights (civil, legal, moral, etc.) should be granted to qualifying robots. Also, there has been considerable disagreement about which essential criterion, or cluster of criteria, a robot would need to satisfy to be eligible for rights, and there is ongoing disagreement as to whether a robot must satisfy the conditions for (moral) agency to qualify either for rights or for (at least some level of) moral consideration. One aim of this paper is to show how the current debate about whether to grant rights to robots would benefit from an analysis and clarification of some key concepts and assumptions underlying that question. My principal objective, however, is to show why we should reframe that question by asking instead whether some kinds of social robots qualify for moral consideration as moral patients. In arguing that the answer to this question is "yes," I draw on some insights in the writings of Hans Jonas to defend my position.
International Scientific Conference: “Transformative Technologies: Legal and Ethical Challenges of the 21st Century”, 2020
Human-robot interactions are inherently different from interactions with other artefacts, as robots are autonomous. Furthermore, recent technological advances have also enabled robots to undertake roles formerly thought to be reserved for humans, e.g. as companions or lovers, since the interactive abilities of robots and their autonomy are sufficient to evoke an automatic cognitive response: robot anthropomorphism. Robot anthropomorphism, the attribution of human attitudes and emotions to robots, implies that behaviours towards robots may have long-term implications for individuals and society. Examples include the manipulation of emotional attachments to robots and an increase in existing privacy risks. To respond to these implications, legal orders must acknowledge that robots are no longer mere tools of human interactions, but parties to such interactions. This paper examines the unique implications for law and society presented by sociable robots, machines that are anthropomorphic by design. First, the phenomenon of robot anthropomorphism and its effects are examined; then, the risks presented by sociable robots are addressed. As such, this chapter lays the foundation for the examination of both the legal problems arising from the autonomy of robots and the recommendations regarding the solution of these problems.
Frontiers in Robotics and AI, 2022
Editorial on the Research Topic: Should Robots Have Standing? The Moral and Legal Status of Social Robots

In a proposal issued by the European Parliament (Delvaux, 2016) it was suggested that robots might need to be considered "electronic persons" for the purposes of social and legal integration. The very idea sparked controversy, and it has been met with both enthusiasm and resistance. Underlying this disagreement, however, is an important moral/legal question: When (if ever) would it be necessary for robots, AI, or other socially interactive, autonomous systems to be provided with some level of moral and/or legal standing? This question is important and timely because it asks about the way that robots will be incorporated into existing social organizations and systems. Typically, technological objects, no matter how simple or sophisticated, are considered to be tools or instruments of human decision making and action. This instrumentalist definition (Heidegger, 1977; Feenberg, 1991; Johnson, 2006) not only has the weight of tradition behind it but has also so far proved to be a useful method for responding to and making sense of innovation in artificial intelligence and robotics. Social robots, however, appear to confront this standard operating procedure with new and unanticipated opportunities and challenges. Following the predictions developed in the computer-as-social-actor studies and the media equation (Reeves and Nass, 1996), users respond to these technological objects as if they were other socially situated entities. Social robots, therefore, appear to be more than just tools, occupying positions where we respond to them as another socially significant Other. This Research Topic of Frontiers in Robotics seeks to make sense of the social significance and consequences of technologies that have been deliberately designed and deployed for social presence and interaction. The question that frames the issue is "Should robots have standing?" This question is derived from an agenda-setting publication in environmental law and ethics written by Christopher Stone, Should Trees Have Standing? Toward Legal Rights for Natural Objects (1974). In extending this mode of inquiry to social robots, contributions to this Research Topic will 1) debate whether and to what extent robots can or should have moral status and/or legal standing, 2) evaluate the benefits and costs of recognizing social status when it involves technological objects and artifacts, and 3) respond to and provide guidance for developing an intelligent and informed plan for the responsible integration of social robots. In order to address these matters, we have assembled a team of fifteen researchers from across the globe and from different disciplines, who bring to this conversation a wide range of viewpoints and methods of investigation. These contributions can be grouped and organized under four subject areas.
Review of Philosophy and Psychology, 2020
Expanding the debate about empathy with human beings, animals, or fictional characters to include human-robot relationships, this paper proposes two different perspectives from which to assess the scope and limits of empathy with robots: the first is epistemological, the second normative. The epistemological approach helps us to clarify whether we can empathize with artificial intelligence or, more precisely, with social robots. The main puzzle here concerns, among other things, exactly what it is that we empathize with if robots do not have emotions or beliefs, since they do not have consciousness in an elaborate sense. However, by comparing robots with fictional characters, the paper shows that we can still empathize with robots and that many of the existing accounts of empathy and mindreading are compatible with such a view. In so doing, the paper focuses on the significance of perspective-taking and claims that we also ascribe to robots something like a perspectival experience. The normative approach examines the moral impact of empathizing with robots. In this regard, the paper critically discusses three possible responses: strategic, anti-barbarizational, and pragmatist. The last position is defended by stressing that we are increasingly compelled to interact with robots in a shared world and that taking robots into our moral consideration should be seen as an integral part of our self- and other-understanding.
Philosophy & Technology, 2019
This paper motivates the idea that social robots should be credited as moral patients, building on an argumentative approach inspired by virtue ethics and social recognition theory. Our proposal answers the call for a nuanced ethical evaluation of human-robot interaction that does justice both to the robustness of the social responses solicited in human users by their artificial companions and to the fundamental human interest in conceptualizing robots as mere instruments and artifacts, devoid of intrinsic moral dignity and special ontological status. We argue that ethical approaches in favor of robot rights, as well as those emphasizing the fundamentally instrumental nature of social robots, fail to justify moral consideration for robots. To explain how the interaction of humans with social robots may, in certain circumstances, be as morally relevant as interaction with other human beings, we turn to social recognition theory. The theory allows us to acknowledge how social robots, unlike other technological artifacts, are capable of establishing quasi-reciprocal relationships of pseudo-recognition with their human users. This recognition dynamic justifies seeing robots as worthy of moral consideration from a virtue-ethical standpoint, as it predicts the pre-reflective formation, in the human user's character, of persistent affective and behavioral habits. Consequently, like social interaction with other living beings, social interaction with robots offers human agents opportunities to cultivate both vices and virtues. We conclude by drawing attention to a potential paradox raised by our analysis and by examining some practical implications of our approach.
i-com, 2020
The aim of this paper is to suggest a framework for categorizing social robots along four dimensions relevant to an ethical, legal, and social evaluation. We argue that categorizing them in this way lets us circumvent problematic evaluations of social robots that are often based on overly broad and abstract considerations. Instead of asking, for example, whether social robots are ethically good or bad in general, we propose that different configurations and combinations of the suggested dimensions entail different paradigmatic challenges with respect to ethical, legal, and social issues (ELSI). We therefore encourage practitioners to consider these paradigmatic challenges when designing social robots and to seek creative design solutions to them.
Frontiers in Robotics and AI, 2022
This paper discusses the ethical nature of empathetic and sympathetic engagement with social robots, ultimately arguing that an entity which is engaged with through empathy or sympathy is engaged with as an "experiencing Other" and is as such due at least "minimal" moral consideration. Additionally, it is argued that extant HRI research often fails to recognize the complexity of empathy and sympathy, such that the two concepts are frequently treated as synonymous. The arguments for these claims proceed in two steps. First, it is argued that there are at least three understandings of empathy, such that particular care is needed when researching "empathy" in human-robot interactions. The phenomenological approach to empathy, perhaps the least utilized of the three understandings discussed, is the approach with the most direct implications for moral standing. Furthermore, because "empathy" and "sympathy" are often conflated, a novel account of sympathy is presented which makes clear the difference between the two concepts, and the importance of these distinctions is argued for. In the second step, the phenomenological insights presented earlier regarding the nature of empathy are applied to the problem of robot moral standing to argue that empathetic and sympathetic engagement with an entity constitutes an ethical engagement with it. The paper concludes by offering several potential research questions that result from the phenomenological analysis of empathy in human-robot interactions.