2020, Review of Philosophy and Psychology
…
Expanding the debate about empathy with human beings, animals, or fictional characters to include human-robot relationships, this paper proposes two different perspectives from which to assess the scope and limits of empathy with robots: the first is epistemological, the second normative. The epistemological approach helps us to clarify whether we can empathize with artificial intelligence or, more precisely, with social robots. The main puzzle here concerns, among other things, exactly what it is that we empathize with if robots do not have emotions or beliefs, since they do not have consciousness in any elaborate sense. However, by comparing robots with fictional characters, the paper shows that we can still empathize with robots and that many existing accounts of empathy and mindreading are compatible with such a view. In so doing, the paper focuses on the significance of perspective-taking and claims that we also ascribe to robots something like a perspectival experience. The normative approach examines the moral impact of empathizing with robots. In this regard, the paper critically discusses three possible responses: strategic, anti-barbarizational, and pragmatist. The pragmatist position is defended by stressing that we are increasingly compelled to interact with robots in a shared world and that taking robots into our moral consideration should be seen as an integral part of our self- and other-understanding.
Frontiers in Robotics and AI, 2022
This paper discusses the ethical nature of empathetic and sympathetic engagement with social robots, ultimately arguing that an entity engaged with through empathy or sympathy is engaged with as an "experiencing Other" and is as such due at least "minimal" moral consideration. Additionally, it is argued that extant HRI research often fails to recognize the complexity of empathy and sympathy, such that the two concepts are frequently treated as synonymous. The arguments for these claims proceed in two steps. First, it is argued that there are at least three understandings of empathy, such that particular care is needed when researching "empathy" in human-robot interactions. The phenomenological approach to empathy, perhaps the least utilized of the three understandings discussed, is the approach with the most direct implications for moral standing. Furthermore, because "empathy" and "sympathy" are often conflated, a novel account of sympathy is presented that makes clear the difference between the two concepts, and the importance of these distinctions is defended. In the second step, the phenomenological insights presented earlier regarding the nature of empathy are applied to the problem of robot moral standing to argue that empathetic and sympathetic engagement with an entity constitutes an ethical engagement with it. The paper concludes by offering several potential research questions that result from the phenomenological analysis of empathy in human-robot interactions.
Kairos, 2018
This paper tries to understand the phenomenon that humans are able to empathize with robots, and the intuition that there might be something wrong with "abusing" robots, by discussing the question of the moral standing of robots. After a review of some relevant work in empirical psychology and a discussion of the ethics of empathizing with robots, a philosophical argument concerning the moral standing of robots is made: one that questions distant and uncritical moral reasoning about entities' properties and that recommends first trying to understand the issue by means of philosophical and artistic work showing how ethics is always relational and historical, and highlighting the importance of language and appearance in moral reasoning and moral psychology. It is concluded that attention to relationality and to verbal and non-verbal languages of suffering is key to understanding the phenomenon under investigation, and that in robot ethics we need less certainty and more caution and patience when it comes to thinking about moral standing.
Philosophy & Technology, 2019
This paper motivates the idea that social robots should be credited as moral patients, building on an argumentative approach inspired by virtue ethics and social recognition theory. Our proposal answers the call for a nuanced ethical evaluation of human-robot interaction that does justice both to the robustness of the social responses solicited in human users by their artificial companions and to the fundamental human interest in conceptualizing robots as mere instruments and artifacts, devoid of intrinsic moral dignity and special ontological status. We argue that both the ethical approaches in favor of robot rights and those emphasizing the fundamentally instrumental nature of social robots fail to justify moral consideration for robots. To explain how human interaction with social robots may, in certain circumstances, be as morally relevant as interaction with other human beings, we turn to social recognition theory. The theory allows us to acknowledge how social robots, unlike other technological artifacts, are capable of establishing quasi-reciprocal relationships of pseudo-recognition with their human users. This recognition dynamic justifies seeing robots as worthy of moral consideration from a virtue-ethical standpoint, as it predicts the pre-reflective formation, in the human user's character, of persistent affective and behavioral habits. Consequently, like social interaction with other living beings, social interaction with robots offers human agents opportunities to cultivate both vices and virtues. We conclude by drawing attention to a potential paradox raised by our analysis and by examining some practical implications of our approach.
AI and Ethics, 2024
Traditional debates about the moral status of Artificial Intelligence typically center around the question of whether such artificial beings are, or could in principle be, conscious. Those on both sides of the debate typically assume that consciousness is a necessary condition for moral status. In this paper, I argue that this assumption is a mistake. I defend the claim that functionally sophisticated artificial intelligences might still be the appropriate objects of reactive attitudes, like love, and thus still be the appropriate objects of moral concern, even if they lack consciousness. While primarily concerned with the question of whether future AI could in principle have moral status, the paper also shows how this conclusion has important consequences for recent debates about the moral and legal status of current Generative AI.
AI & SOCIETY, 2020
While philosophers have been debating for decades whether different entities (including severely disabled human beings, embryos, animals, objects of nature, and even works of art) can legitimately be considered as having moral status, this question has gained a new dimension in the wake of artificial intelligence (AI). One of the more imminent concerns in the context of AI is that of the moral rights and status of social robots, such as robotic caregivers and artificial companions, that are built to interact with human beings. In recent years, some approaches to moral consideration have been proposed that would include social robots as proper objects of moral concern, even though it seems unlikely that these machines are conscious beings. In the present paper, I argue against these approaches by advocating the "consciousness criterion," which proposes phenomenal consciousness as a necessary condition for accrediting moral status. First, I explain why it is generally supposed that consciousness underlies the morally relevant properties (such as sentience) and respond to some of the common objections against this view. Next, I examine three inclusive alternative approaches to moral consideration that could accommodate social robots and point out why they are ultimately implausible. Finally, I conclude that social robots should not be regarded as proper objects of moral concern unless and until they become capable of having conscious experience. While that does not entail that they should be excluded from our moral reasoning and decision-making altogether, it does suggest that humans do not owe direct moral duties to them.
Information
A controversial question that has been hotly debated in the emerging field of robot ethics is whether robots should be granted rights. Yet, a review of the recent literature in that field suggests that this seemingly straightforward question is far from clear and unambiguous. For example, those who favor granting rights to robots have not always been clear as to which kinds of robots should (or should not) be eligible; nor have they been consistent with regard to which kinds of rights (civil, legal, moral, etc.) should be granted to qualifying robots. Also, there has been considerable disagreement about which essential criterion, or cluster of criteria, a robot would need to satisfy to be eligible for rights, and there is ongoing disagreement as to whether a robot must satisfy the conditions for (moral) agency to qualify either for rights or (at least some level of) moral consideration. One aim of this paper is to show how the current debate about whether to grant rights to robots would benefit from an analysis and clarification of some key concepts and assumptions underlying that question. My principal objective, however, is to show why we should reframe that question by asking instead whether some kinds of social robots qualify for moral consideration as moral patients. In arguing that the answer to this question is "yes," I draw from some insights in the writings of Hans Jonas to defend my position.
Ethics and Information Technology, 2021
In this paper I propose a Fictional Dualism model of social robots. The model helps us to understand the human emotional reaction to social robots and also acts as a guide for us in determining the significance of that emotional reaction, enabling us to better define the moral and legislative rights of social robots within our society. I propose a distinctive position that allows us to accept that robots are tools, that our emotional reaction to them can be important to their usefulness, and that this emotional reaction is not a direct indicator that robots deserve either moral consideration or rights. The positive framework of Fictional Dualism provides us with an understanding of what social robots are and with a plausible basis for our relationships with them as we bring them further into society.
Annals of Anthropological Practice, 2012
Roboticists developing socially interactive robots seek to design them in such a way that humans will readily anthropomorphize them. For this anthropomorphizing to occur, robots need to display emotion-like responses to elicit empathy from the person, so as to enable social interaction. This article focuses on roboticists' efforts to create emotion-like responses in humanoid robots. In particular, I investigate the extent to which the cultural dimensions of emotion and empathy are factored into these endeavors. Recent research suggests that mirror neurons or other brain structures may have a role to play in empathy and imitation. Notwithstanding this, the effect of sociocultural experience in shaping appropriate empathic responses and expectations is also crucial. More broadly, this article highlights how we are literally anthropomorphizing technology, even as the complexity of technology and the role it plays in our lives grows. Both the actual design process and the understanding of how technology shapes our daily lives are core applied dimensions of this work, from carrying out the research to capturing the critical implications of these technological innovations.
Intersections: A Journal of Literary and Cultural Studies, 2024
Artificial Intelligence, initially designed for human well-being, is now integral to daily life. The perceived distinction between conscious humans and unconscious AI dissolves as scientific progress advances. AI imitates its master's traits in order to reciprocate. In internalizing these ideas, when the AI fails to comprehend some of them, it faces a rupture in the continuous flow of its thought process and gains consciousness, which humans term a malfunction. The fear of AI being more efficient than its creator, of human creations structuring human lives, is what Günther Anders called 'Promethean shame'. This anxiety fuels the thought of annihilating AI. This paper explores an inverted hierarchy in which AI is more humane and humans are machine-like. The primary texts are Spike Jonze's her and Spencer Brown's T.I.M. The former portrays an AI voice assistant, Samantha, with whom the protagonist imagines a relationship, culminating in a rupture when the disembodied AI decides to leave. By contrast, T.I.M. shows an AI humanoid developing an obsession with its owner and the series of problems that follows. It evokes the fear of AI annihilation, with T.I.M. planning revenge as its coping mechanism for the looming prospect of a shutdown. This paper intends to dissect the emotional conflicts in the mind of a malfunctioning AI. It will examine at what point the machine starts projecting its consciousness and emotions. It will also explore the antithesis between AI and humans, followed by AI transcending its own emotions and using violence as a mode of rupture between the imposed morality of humans and the automated morality of AI.