This essay explores the emotional and ethical implications of artificial intelligence through the lens of Steven Spielberg's A.I. Artificial Intelligence (2001) and its Cybertronics Mecha child, David. It contrasts the emotional complexity of David's character with philosophical arguments by Brooks and Haugeland regarding human-level intelligence and the potential of AI to emulate human behaviors such as love, fear, and reasoning. By challenging prevailing notions of AI's limitations, the essay engages deeper questions about the essence of humanity, the nature of intelligence, and our perceptions of artificial beings.
Review of Philosophy and Psychology, 2020
Expanding the debate about empathy with human beings, animals, or fictional characters to include human-robot relationships, this paper proposes two different perspectives from which to assess the scope and limits of empathy with robots: the first is epistemological, the second normative. The epistemological approach helps us to clarify whether we can empathize with artificial intelligence or, more precisely, with social robots. The main puzzle here concerns, among other things, exactly what it is that we empathize with if robots do not have emotions or beliefs, since they lack consciousness in any elaborate sense. By comparing robots with fictional characters, however, the paper shows that we can still empathize with robots and that many existing accounts of empathy and mindreading are compatible with such a view. In doing so, the paper focuses on the significance of perspective-taking and claims that we also ascribe to robots something like a perspectival experience. The normative approach examines the moral impact of empathizing with robots. In this regard, the paper critically discusses three possible responses: strategic, anti-barbarizational, and pragmatist. The last position is defended by stressing that we are increasingly compelled to interact with robots in a shared world and that taking robots into our moral consideration should be seen as an integral part of our self- and other-understanding.
Ethics in Progress, 2018
With the development of autonomous robots that may one day be capable of speaking, thinking, learning, self-reflecting, and sharing emotions, that is, with the rise of robots as artificial moral agents (AMAs), robot scientists such as Abney, Veruggio, and Petersen are already optimistic that sooner or later we will need to call such robots "people" or rather "Artificial People" (AP). The paper rejects this forecast, arguing that it rests on three conflicting metaphysical assumptions. The first is the idea that it is possible to define persons precisely and to apply that definition to robots, or to use it to differentiate human beings from robots. Further, the argument for APs presupposes non-reductive physicalism (the second assumption) and materialism (the third), ultimately producing implausible convictions about future robotics. I therefore suggest following Christine Korsgaard's defence of animals as ends in themselves with moral standing. I show that her argument can be extended to robots as well, at least to robots capable of pursuing their own good (even if they are not rational). Korsgaard's interpretation of Kant offers an option that allows us to leave complicated metaphysical notions such as "person" or "subject" out of the debate, without denying robots' status as agents.
Film-Philosophy
Steven Spielberg's A.I. Artificial Intelligence (2001) and Alex Proyas's neo-noir I, Robot (2004) may both be understood as attempts to answer the question: 'What conditions does artificial intelligence research have to satisfy before it can justly claim to have produced something (a 'robot') which truly simulates a human being?' I would like to show that, far from construing this question simply in terms of intelligence, the films in question demonstrate that far more than this is at stake, and each articulates the 'more' in different, but related, terms. Moreover, contrary to what viewers may suspect, neither film claims that the achievement of this goal is actualisable; rather, each posits a goal for artificial intelligence research by which it could measure its (lack of) progress.
AI and Ethics, 2024
Traditional debates about the moral status of Artificial Intelligence typically center on the question of whether such artificial beings are, or could in principle be, conscious. Those on both sides of the debate typically assume that consciousness is a necessary condition for moral status. In this paper, I argue that this assumption is a mistake. I defend the claim that functionally sophisticated artificial intelligences might still be the appropriate objects of reactive attitudes, like love, and thus still be the appropriate objects of moral concern, even if they lack consciousness. While primarily concerned with the question of whether future AI could in principle have moral status, the paper also shows how this conclusion has important consequences for recent debates about the moral and legal status of current Generative AI.
Cognitive Science, 2020
A robot's decision to harm a person is sometimes considered the ultimate proof that it has gained a human-like mind. Here, we contrasted predictions about the attribution of mental capacities drawn from moral typecasting theory with the denial of agency described in the dehumanization literature. Experiments 1 and 2 investigated mind perception for intentionally and accidentally harmful robotic agents based on text and image vignettes. Experiment 3 disambiguated agent intention (malevolent, benevolent) and additionally varied the type of agent (robotic, human) using short computer-generated animations. Harmful robotic agents were consistently imbued with mental states to a lower degree than benevolent agents, supporting the dehumanization account. Further results revealed that a human moral patient appeared to suffer less when depicted with a robotic agent than with another human. The findings suggest that future robots may become subject to human-like dehumanization mechanisms, which challenges established beliefs about anthropomorphism in the domain of moral interactions.
International Journal of Social Robotics, 2019
The Montréal Declaration for Responsible Development of Artificial Intelligence states that emerging technologies ought not "encourage cruel behaviour towards robots that take on the appearance of human beings or animals and act in a similar fashion." The idea of a causal link between cruelty and kindness to artificial and living beings, human or animal, is controversial and underexplored, despite its increasing relevance to robotics. Kate Darling recently marshalled Immanuel Kant's argument that cruelty to animals promotes cruelty to people in order to argue for an analogous link concerning social robots. Others, such as Johnson and Verdicchio, have counter-argued that animal analogies are often flawed, partly because they ignore social robots' true nature, including their lack of sentience. This, they say, weakens Darling's argument that social robots will have virtue-promoting or vice-promoting effects regarding our treatment of living beings. Certain ideas in this debate, including those of anthropomorphism, projection, animal analogies, and Kant's causal claim, require clarification and critical attention. Concentrating on robot animals, this paper examines strengths and weaknesses on both sides of this argument. It finds there is some reason for thinking that social robots may causally affect virtue, especially in terms of the moral development of children and responses to nonhuman animals. This conclusion has implications for future robot design and interaction.
Proceedings of the AAAI Conference on Artificial Intelligence, 22, 2007
This paper analyzes the ethical and axiological implications of the interaction between human persons and robots. Special attention is paid to issues of the depersonalization of society and the devaluing of our natural environment and to the issue of taking robots personally, i.e., relating to them and treating them as if they are persons. The philosophical frameworks of personalism and process philosophy are used as a lens through which to make this analysis.
2005
We set out to test whether the Media Equation also applies to robots, especially with respect to negative behavior. This would mean that we humans are inclined to treat robots the same as we would treat another human being. To do so, we replicated an experiment conducted by Stanley Milgram in 1965, in which Milgram tested how far people would go in torturing another person. We performed the experiment with a robot instead of a human victim and compared the results of the two experiments.