2023, Ethics of Socially Disruptive Technologies: An Introduction
https://doi.org/10.11647/obp.0366.03…
32 pages
Advancements in artificial intelligence and (social) robotics raise pertinent questions as to how these technologies may help shape the society of the future. The main aim of the chapter is to consider the social and conceptual disruptions that might be associated with social robots, and humanoid social robots in particular. This chapter starts by comparing the concepts of robots and artificial intelligence and briefly explores the origins of these expressions. It then explains the definition of a social robot, as well as the definition of humanoid robots. A key notion in this context is the idea of anthropomorphism: the human tendency to attribute human qualities, not only to our fellow human beings, but also to parts of nature and to technologies. This tendency to anthropomorphize technologies by responding to and interacting with them as if they have human qualities is one of the reasons why social robots (in particular social robots designed to look and behave like human beings) can be socially disruptive. As is explained in the chapter, while some ethics researchers believe that anthropomorphization is a mistake that can lead to various forms of deception, others — including both ethics researchers and social roboticists — believe it can be useful or fitting to treat robots in anthropomorphizing ways. The chapter explores that disagreement by, among other things, considering recent philosophical debates about whether social robots can be moral patients, that is, whether it can make sense to treat them with moral consideration. Where one stands on this issue will depend either on one’s views about whether social robots can have, imitate, or represent morally relevant properties, or on how people relate to social robots in their interactions with them. Lastly, the chapter urges that the ethics of social robots should explore intercultural perspectives, and highlights some recent research on Ubuntu ethics and social robots.
Frontiers in Robotics and AI, 2022
Editorial on the Research Topic Should Robots Have Standing? The Moral and Legal Status of Social Robots. In a proposal issued by the European Parliament (Delvaux, 2016), it was suggested that robots might need to be considered "electronic persons" for the purposes of social and legal integration. The very idea sparked controversy, and it has been met with both enthusiasm and resistance. Underlying this disagreement, however, is an important moral/legal question: When (if ever) would it be necessary for robots, AI, or other socially interactive, autonomous systems to be provided with some level of moral and/or legal standing? This question is important and timely because it asks about the way that robots will be incorporated into existing social organizations and systems. Typically, technological objects, no matter how simple or sophisticated, are considered to be tools or instruments of human decision making and action. This instrumentalist definition (Heidegger, 1977; Feenberg, 1991; Johnson, 2006) not only has the weight of tradition behind it, but has so far proved to be a useful method for responding to and making sense of innovation in artificial intelligence and robotics. Social robots, however, appear to confront this standard operating procedure with new and unanticipated opportunities and challenges. Following the predictions developed in the "computers as social actors" studies and the media equation (Reeves and Nass, 1996), users respond to these technological objects as if they were other socially situated entities. Social robots, therefore, appear to be more than just tools, occupying positions where we respond to them as another socially significant Other. This Research Topic of Frontiers in Robotics and AI seeks to make sense of the social significance and consequences of technologies that have been deliberately designed and deployed for social presence and interaction. The question that frames the issue is "Should robots have standing?"
This question is derived from an agenda-setting publication in environmental law and ethics written by Christopher Stone, Should Trees Have Standing? Toward Legal Rights for Natural Objects (1974). In extending this mode of inquiry to social robots, contributions to this Research Topic of the journal will 1) debate whether and to what extent robots can or should have moral status and/or legal standing, 2) evaluate the benefits and the costs of recognizing social status, when it involves technological objects and artifacts, and 3) respond to and provide guidance for developing an intelligent and informed plan for the responsible integration of social robots. In order to address these matters, we have assembled a team of fifteen researchers from across the globe and from different disciplines, who bring to this conversation a wide range of viewpoints and methods of investigation. These contributions can be grouped and organized under the following four subject areas:
International Journal of Social Robotics, 2021
Social robots are designed to facilitate interaction with humans through "social" behavior. As literature in the field of human-robot interaction shows, this sometimes leads to "bad" behavior towards the robot, or "abuse" of the robot. Virtue ethics offers a helpful way to capture the intuition that although nobody is harmed when a robot is "mistreated", there is still something wrong with this kind of behavior: it damages the moral character of the person engaging in it, especially when it is habitual. However, one of the limitations of current applications of virtue ethics to robots and technology is their focus on the individual and on individual behavior, and their insufficient attention to the temporal and bodily aspects of virtue. After positioning its project in relation to the work of Shannon Vallor and Robert Sparrow, the present paper explores what it would mean to interpret and apply virtue ethics in a more social and relational way, and in a way that takes into account the link between virtue and the body. In particular, it proposes (1) to use the notion of practice as a way to conceptualize how the individual behavior, the virtue of the person, and the technology in question are related to their wider social-practical context and history, and (2) to use the notions of habit and performance to conceptualize the incorporation and performance of virtue. This involves use of the work of MacIntyre, revised by drawing on Bourdieu's notion of habitus in order to highlight the temporal, embodied, and performative aspects of virtue. The paper then shows what this means for thinking about the moral standing of social robots, for example for the ethics of sex robots and for evaluating abusive behaviors such as kicking robots.
The paper concludes that this approach not only gives us a better account of what happens when people behave "badly" towards social robots, but also suggests a more comprehensive virtue ethics of technology that is fully relational and performance-oriented, and that is able not only to acknowledge but also to theorize the temporal and bodily dimensions of virtue.
Information
A controversial question that has been hotly debated in the emerging field of robot ethics is whether robots should be granted rights. Yet, a review of the recent literature in that field suggests that this seemingly straightforward question is far from clear and unambiguous. For example, those who favor granting rights to robots have not always been clear as to which kinds of robots should (or should not) be eligible; nor have they been consistent with regard to which kinds of rights (civil, legal, moral, etc.) should be granted to qualifying robots. Also, there has been considerable disagreement about which essential criterion, or cluster of criteria, a robot would need to satisfy to be eligible for rights, and there is ongoing disagreement as to whether a robot must satisfy the conditions for (moral) agency to qualify either for rights or (at least some level of) moral consideration. One aim of this paper is to show how the current debate about whether to grant rights to robots would benefit from an analysis and clarification of some key concepts and assumptions underlying that question. My principal objective, however, is to show why we should reframe that question by asking instead whether some kinds of social robots qualify for moral consideration as moral patients. In arguing that the answer to this question is "yes," I draw from some insights in the writings of Hans Jonas to defend my position.
Agility Robotics has announced plans to mass produce Digit, its humanoid robot (Kolodny 2023). In a New York Times article commenting on the opening of the Agility Robotics factory, one sentence reads: "the long-anticipated robot revolution has begun" (Howe and Antaya 2023). As this selection of recent news suggests, humanoid robots might soon populate our world. At the same time, the ethical literature on human-like robots is full of concerns about the risks of having such robots around. For example, Alsegier points out that human-like robots are "one of the most controversial facets of modern technology" (Alsegier 2016: 24). Russell notes that "there is no good reason for robots to have humanoid form. There are also good, practical reasons not to have humanoid form" (Russell 2019: 126). Darling, in her book about robots, points out, "The main problem of anthropomorphism in robotics is that, right now, we aren't treating it as a matter of contention" (Darling 2021: 155). She believes that there is not enough discussion of this issue, and that we are deploying robots without fully understanding the impact of anthropomorphism on people. The more human-like a robot is, the easier it is to anthropomorphize (Gasser 2021: 334). The fact that we have already gotten used to robotic vacuum cleaners, smart speakers, and delivery robots does not mean that accepting human-like robots is the natural next step. Humanoid robots, with their human likeness, raise additional ethically relevant issues that should be discussed first. In this book, I focus on the ethically qualitative shift involved in designing robots that resemble humans. Throughout the book, many questions will be asked that relate to the ethical issues of human likeness; among these are: Is it safe to have human-like robots around us? Who would human-like robots represent, and why should that be a matter of concern? To what extent are human-like robots achievable? Is it ethical to make robots that are too human-like? Could robots have human-like ethics?
Could robots be responsible in a human-like way? How should we treat robots that look like us? How should we treat robots that are like us? How do we mitigate the risks resulting from human-like robots? All these questions will be covered in the chapters that follow. In recent years, numerous books focusing on the ethical aspects of robots have been published. Besides the already mentioned book by Kate Darling, there are many other great books (e.g.
i-com, 2020
The aim of this paper is to suggest a framework for categorizing social robots with respect to four dimensions relevant to an ethical, legal and social evaluation. We argue that by categorizing them in this way, we can circumvent problematic evaluations of social robots that are often based on overly broad and abstract considerations. Instead of asking, for example, whether social robots are ethically good or bad in general, we propose that different configurations (and combinations) of the suggested dimensions entail different paradigmatic challenges with respect to ethical, legal and social issues (ELSI). We therefore encourage practitioners to consider these paradigmatic challenges when designing social robots and to find creative design solutions.
This work examines humanoid social robots in Japan and North America with a view to comparing and contrasting the projects cross-culturally. In North America, I look at the work of Cynthia Breazeal at the Massachusetts Institute of Technology and her sociable robot project: Kismet. In Japan, at Osaka University, I consider the project of Hiroshi Ishiguro: Repliée-Q2. I first distinguish between utilitarian and affective social robots. Then, drawing on the published works of Breazeal and Ishiguro, I examine the proposed vision of each project. Next, I examine specific characteristics (embodied and social intelligence, morphology and aesthetics, and moral equivalence) of Kismet and Repliée with a view to comparing the underlying concepts associated with each. These features are in turn connected to the societal preconditions of robots generally. Specifically, the roles that the history of robots, theology/spirituality, and popular culture play in the reception of and attitudes toward robots are considered.
KnE Social Sciences, 2020
This paper aims to show the possible and actual synergies between social robotics and sociology. The author argues that social robots are one of the best fields of inquiry for building a bridge between the two cultures – the one represented by the social sciences and the humanities, and the one represented by the natural sciences and engineering. To achieve this result, quantitative and qualitative analyses are implemented. By using scientometric tools like Ngram Viewer, search engines such as Google Scholar, and hand calculations, the author detects the emergence of the term-and-concept 'social robots' in its current use, the absolute and relative frequencies of this term in the scientific literature in the period 1800-2008, the frequency distribution of publications including this term in the period 2000-2019, and the magnitude of publications in which the term 'social robots' is associated with the term 'sociology' or 'social work'. Finally, employing qualitative analysis and focusing on exemplary cases, the paper shows different ways of conducting research that relates sociology to robotics, from a theoretical or instrumental point of view. It is argued that sociologists and engineers could work as a team to observe, analyze, and describe the interaction between humans and social robots, using research techniques and theoretical frames provided by sociology. In turn, this knowledge can be used to build more effective and humanlike social robots.