2022
https://doi.org/10.48550/arXiv.2207.12555
The development of social robotics for the practice of care, and European prospects for incorporating these AI-based systems into institutional healthcare contexts, call for urgent ethical reflection to (re)configure our practical life according to human values and rights. Despite growing attention to the ethical implications of social robotics, the current debate on one of its central branches, socially assistive robotics (SAR), rests upon an impoverished ethical approach. This paper presents and examines some tendencies of this prevailing approach, identified through a critical literature review. Based on this analysis of a representative case of how ethical reflection on social robotics is being conducted, some future research lines are outlined that may help reframe and deepen its ethical implications.
Technology in Society, 2021
Along with its potential contributions to the practice of care, socially assistive robotics raises significant ethical issues. The growing development of this technoscientific field of intelligent robotics has thus triggered a widespread proliferation of ethical attention to its disruptive potential. However, the current landscape of ethical debate is fragmented and conceptually disordered, endangering ethics' practical strength for normatively addressing these challenges. This paper presents a critical literature review of the ethical issues of socially assistive robotics, which provides a comprehensive and intelligible overview of the current ethical approach to this technoscientific field. On the one hand, ethical issues have been identified, quantitatively analyzed, and categorized into three main thematic groups: Well-being, Care, and Justice. On the other hand, on the basis of some significant tendencies disclosed in the current approach, future lines of research and issues regarding the enrichment of the ethical gaze on socially assistive robotics have been identified and outlined.
IEEE Robotics and Automation Magazine, 2011
The health, education, and other service applications for robots that assist through primarily social rather than physical interaction are rapidly growing, and so is the research into such technologies. Socially assistive robotics (SAR) aims to address critical areas and gaps in care by automating the supervision, coaching, motivation, and companionship aspects of one-on-one interactions with individuals from various large and growing populations, including stroke survivors, the elderly, individuals with dementia, and children with autism spectrum disorders, among many others. In this way, roboticists hope to improve the standard of care for large user groups. Naturally, SAR systems pose several ethical challenges regarding their design, implementation, and deployment. This paper examines the ethical challenges of socially assistive robotics from three points of view (user, caregiver, peer), using core principles from medical ethics (autonomy, beneficence, non-maleficence, justice) to determine how intended and unintended effects of a SAR can impact the delivery of care.
Journal of Medical Ethics, 2019
Different embodiments of technology permeate all layers of public and private domains in society. In the public domain of aged care, attention is increasingly focused on the use of Socially Assistive Robots (SARs) supporting caregivers and older adults to guarantee that older adults receive care. The introduction of SARs in aged-care contexts is accompanied by intensive empirical and philosophical research. Although these efforts merit praise, current empirical and philosophical research remain too far apart. Strengthening the connection between these two fields is crucial to a full understanding of the ethical impact of these technological artefacts. To bridge this gap, we propose a philosophical-ethical framework for SAR use, one that is grounded in the dialogue between empirical-ethical knowledge about, and philosophical-ethical reflection on, SAR use. We highlight the importance of considering the intuitions of older adults and their caregivers in this framework. Grounding philosophical-ethical reflection in these intuitions opens the ethics of SAR use in aged care to its own socio-historical contextualisation. Referring to the work of Margaret Urban Walker, Joan Tronto, and Andrew Feenberg, it is argued that this socio-historical contextualisation of the ethics of SAR use already has strong philosophical underpinnings. Moreover, this contextualisation enables us to formulate a rudimentary decision-making process about SAR use in aged care which rests on three pillars: (1) stakeholders' intuitions about SAR use as sources of knowledge; (2) interpretative dialogues as democratic spaces to discuss the ethics of SAR use; (3) the concretisation of ethics in SAR use.
2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, 2012
We use a long-term study of a robotic eating-aid for disabled users to illustrate how empirical use gives rise to a set of ethical issues that might be overlooked in ethics discussions based on theoretical extrapolation of the current state of the art in robotics. This approach provides an important complement to existing robot ethics by revealing new issues as well as providing actionable guidance for current and future robot design. We discuss our material in relation to the literature on robot ethics, specifically the risk of robots performing caretaking tasks and thereby causing increased isolation for care recipients. Our data identifies a different set of ethical issues, such as independence, privacy, and identity, where robotics, if carefully designed and developed, can make positive contributions. Robots are becoming increasingly ubiquitous, and the field is currently under rapid development. On the one hand, this is highly appreciated, since robots replace humans in performing tedious, repetitive, and even dangerous tasks; on the other hand, they are criticized for leaving human workers without jobs and undermining their competence. This tension illustrates the complexity of our attitudes toward, understanding of, and relation to robots, partly originating from influences from fiction. Books such as Asimov's I, Robot, movies such as the Star Wars series, Terminator, and RoboCop, and computer games like Portal and Mass Effect paint vivid portraits of highly skilled robots that sometimes cannot be distinguished from humans. In addition, the most common theme in robot fiction, robots taking over the world and suppressing humans, thus ending our way of life as we know it, creates an underlying skepticism and fear that shapes our attitudes toward robots (Ferneus et al., 2009).
Humanities & Social Sciences Communications, 2023
The development of care robots has been accompanied by a number of technical and social challenges, which are guided by the question: "What is a robot for?" Debates guided by this question have discussed the functionalities and tasks that can be delegated to a machine without harming human dignity. However, we argue that these ethical debates do not offer any alternatives for designing care robots for the common good. In particular, we stress the need to shift the current ethical discussion on care robots towards a reflection on the politics of robotics, understanding politics as the search for the common good. To develop this proposal, we use the theoretical perspective of science and technology studies, which we integrate into the analysis of disagreement inspired by a consensus-dissensus way of thinking, based on discussing and rethinking the relationships of care robots with the common good and the subjects of such good. Thus, the politics of care robots allows for the emergence of a set of discussions on how human-machine configurations are designed and practiced, as well as the role of the market of technological innovation in the organisation of care.
Work in the field of machine medical ethics, especially as it applies to healthcare robots, generally focuses attention on controlling the decision-making capabilities and actions of autonomous machines for the sake of respecting the rights of human beings. Absent from much of the current literature is a consideration of the other side of this issue: the question of machine rights, or the moral standing of these socially situated and interactive technologies. This paper investigates the moral situation of healthcare robots by examining how human beings should respond to these artificial entities that will increasingly come to care for us. A range of possible responses will be considered, bounded by two opposing positions. We can, on the one hand, deal with these mechanisms by deploying the standard instrumental theory of technology, which renders care-giving robots nothing more than tools and therefore something we do not really need to care about. Or we can, on the other hand, treat these machines as domestic companions that occupy the place of another "person" in social relationships, becoming someone we increasingly need to care about. Unfortunately, neither option is entirely satisfactory, and the objective of this paper is not to argue for one or the other but to formulate the opportunities and challenges of ethics in the era of robotic caregivers.
International Journal of Social Robotics
This study examined people’s moral judgments and trait perceptions toward a healthcare agent’s response to a patient who refuses to take medication. A sample of 524 participants was randomly assigned to one of eight vignettes in which the type of healthcare agent (human vs. robot), the use of health message framing (emphasizing health losses for not taking vs. health gains in taking the medication), and the ethical decision (respect for autonomy vs. beneficence/nonmaleficence) were manipulated to investigate their effects on moral judgments (acceptance and responsibility) and trait perceptions (warmth, competence, trustworthiness). The results indicated that moral acceptance was higher when the agents respected the patient’s autonomy than when the agents prioritized beneficence/nonmaleficence. Moral responsibility and perceived warmth were higher for the human agent than for the robot, and the agent who respected the patient’s autonomy was perceived as warmer, but less competent an...
Robotics and Autonomous Systems, 2016
How can we best identify, understand, and deal with ethical and societal issues raised by healthcare robotics? This paper argues that next to ethical analysis, classic technology assessment, and philosophical speculation, we need forms of reflection, dialogue, and experiment that come, quite literally, much closer to innovation practices and contexts of use. The authors discuss a number of ways to achieve that. Informed by their experience with "embedded" ethics in technical projects and with various tools and methods of responsible research and innovation, the paper identifies "internal" and "external" forms of dialogical research and innovation, reflects on the possibilities and limitations of these forms of ethical-technological innovation, and explores a number of ways they can be supported by policy at the national and supranational level.
Technology in Society, 2020
Should we deploy social robots in care settings? This question, asked from a policy standpoint, requires that we understand the potential benefits and downsides of deploying social robots in care situations. Potential benefits could include increased efficiency, increased welfare, physiological and psychological benefits, and experienced satisfaction. There are, however, important objections to the use of social robots in care. These include the possibility that relations with robots can displace human contact, that these relations could be harmful, that robot care is undignified and disrespectful, and that social robots are deceptive. I propose a framework for evaluating all these arguments in terms of three aspects of care: structure, process, and outcome. I then highlight the main ethical considerations that have to be made in order to untangle the web of pros and cons of social robots in care, as these pros and cons are related to trade-offs regarding the quantity and quality of care, process and outcome, and objective and subjective outcomes.