2005
Abstract We set out to test whether the Media Equation could also be applied to robots, especially with respect to negative behavior. This would mean that we humans are inclined to treat robots the same as we would treat another human being. To do so, we replicated an experiment conducted by Stanley Milgram in 1965. With this experiment Milgram tested how far people would go in torturing another person. We performed the experiment with a robot instead of a human victim and compared the results of the two experiments.
2005
Abstract. Robots are becoming increasingly important in our society, but their social role remains unclear. The Media Equation states that people treat computers as social actors, and it is likely to apply to robots as well. This study investigates the limitations of the Media Equation in human-robot interaction by focusing on robot abuse. Milgram's experiment on obedience was reproduced with a robot in the role of the student. All participants continued up to the highest voltage setting, compared to only 40% in Milgram's original study.
Cognitive Science, 2020
A robot's decision to harm a person is sometimes considered the ultimate proof of it gaining a human-like mind. Here, we contrasted predictions about the attribution of mental capacities drawn from moral typecasting theory with the denial of agency described in the dehumanization literature. Experiments 1 and 2 investigated mind perception for intentionally and accidentally harmful robotic agents based on text and image vignettes. Experiment 3 disambiguated agent intention (malevolent, benevolent) and additionally varied the type of agent (robotic, human) using short computer-generated animations. Harmful robotic agents were consistently imbued with mental states to a lower degree than benevolent agents, supporting the dehumanization account. Further results revealed that a human moral patient appeared to suffer less when depicted with a robotic agent than with another human. The findings suggest that future robots may become subject to human-like dehumanization mechanisms, which challenges established beliefs about anthropomorphism in the domain of moral interactions.
Technological Forecasting & Social Change, 2023
An assortment of attacks and aggressive behaviors toward artificial intelligence (AI)-enhanced robots has recently emerged. This paper explores how the human emotions and motivations involved in attacks on robots are being framed, as well as how the incidents are presented in social media and traditional broadcast channels. The paper analyzes how robots are construed as the "other" in many contexts, often akin to the perspectives of the "machine wreckers" of past centuries. It argues that a focus on the emotions and motivations of robot attackers can be useful in mitigating anti-robot activities. "Hate crime" or "hate incident" characterizations of some anti-robot efforts should be utilized in discourse as well as in some future legislative efforts. Hate crime framings can aid in identifying generalized antagonism and antipathy toward robots as autonomous and intelligent entities in the context of anti-robot attacks. Human self-defense may become a critical issue in some anti-robot attacks, especially when apparently malfunctioning robots are involved. Attacks on robots present individuals with vicarious opportunities to participate in anti-robot activity and may also elicit other aggressive, copycat actions as videos and narrative accounts are shared via social media as well as personal networks.
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems
As robots may take a greater part in our moral decision-making processes, whether people hold them accountable for moral harm becomes critical to explore. Blame and punishment signify moral accountability, often involving emotions. We quantitatively examined people's willingness to blame or punish an emotional vs. non-emotional robot that admits to its wrongdoing. Studies 1 and 2 (online video interaction) showed that people may punish a robot more because of its lack of perceived emotional capacity than because of its perceived agency. Study 3 (in the lab) demonstrated that people were willing neither to blame nor to punish the robot. Punishing non-emotional robots seems more likely than blaming them, yet punishment towards robots is more likely to arise online than offline. We reflect on whether and why victimized humans (and those who care for them) may seek retributive justice against robot scapegoats when there are no humans to hold accountable for moral harm.
In this chapter, I ask whether we can coherently conceive of robots as moral agents and as moral patients. I answer both questions negatively but conditionally: for as long as robots lack certain features, they can be neither moral agents nor moral patients. These answers, of course, are not new. They have, however, recently been the object of sustained critical attention (Coeckelbergh 2014; Gunkel 2014). The novelty of this contribution, then, resides in arriving at these precise answers by way of arguments that avoid these recent challenges. This is achieved by considering the psychological and biological bases of moral practices and arguing that the relevant differences in such bases are sufficient, for the time being, to exclude robots from adopting both an active and a passive moral role.
Ethics in Progress, 2021
The main objective of this paper is to discuss people’s expectations towards social robots’ moral attitudes. Conclusions are based on the results of three selected empirical studies which used stories of robots (and humans) acting in hypothetical scenarios to assess the moral acceptance of their attitudes. The analysis indicates both differences and similarities in expectations towards robot and human attitudes. Decisions to remove someone’s autonomy are less acceptable from robots than from humans. In certain circumstances, the protection of a human’s life is considered more morally right than the protection of the robot’s being. Robots are also more strongly expected to make utilitarian choices than human agents. However, there are situations in which people make consequentialist moral judgements when evaluating both human and robot decisions. Both robots and humans receive a similar overall amount of blame. Furthermore, it can be concluded that robots should prote...
International Journal of Social Robotics, 2019
Robots (and computers) are increasingly being used in scenarios where they interact socially with people. How people react to these agents is telling about the perceived empathy of such agents. Mistreatment of robots (or computers) by co-workers might provoke such telling reactions. This study examines perceived mistreatment directed towards a robot in comparison to a computer. This will provide some understanding of how people feel about robots in collaborative social settings. We conducted a two-by-two between-subjects study with 80 participants. Participants worked cooperatively with either a robot or a computer agent. An experimental confederate would act either aggressively or neutrally towards the agent. We hypothesized that people would not perceive aggressive speech as mistreatment when an agent was capable of emotional feelings and similar to themselves; that participants would perceive the robot as more similar in appearance and emotionally capable to themselves than a compu...
arXiv, 2018
The emergence of agentic technologies (e.g., robots) in increasingly public realms (e.g., social media) has revealed surprising antisocial tendencies in human-agent interactions. In particular, there is growing indication of people's propensity to act aggressively towards such systems - without provocation and unabashedly so. Towards understanding whether this aggressive behavior is anomalous or whether it is associated with general antisocial tendencies in people's broader interactions, we examined people's verbal disinhibition towards two artificial agents. Using Twitter as a corpus of free-form, unsupervised interactions, we identified 40 independent Twitter users who tweeted abusively or non-abusively at one of two high-profile robots with Twitter accounts (TMI's Bina48 and Hanson Robotics' Sophia). Analysis of 50 of each user's tweets most proximate to their tweet at the respective robot (N=2,000) shows people's aggression towards the robots to be as...
International Journal of Social Robotics, 2019
Interacting with Computers, 2008
Frontiers in Robotics and AI, 2022
Ethics of Socially Disruptive Technologies: An Introduction, 2023
AI & Society, 2021
Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction - HRI '12, 2012
Science and Engineering Ethics, 2019
2018 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO), 2018
2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2016
AI and Ethics, 2024
Frontiers in Robotics and AI, 2021
Proceedings of the 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)
Philosophy & Technology, 2019