2018
The recent technological developments in robotics and AI bring a greater sense of urgency to the ethical dimension of the future relationships between persons and social robots. To define this relationship, we claim that there is a need to reflect upon the concept of action. First, we will describe von Wright's non-causal theory of human action; then we will focus on the actions performed by so-called social robots. We will point out that the actions of social robots are always planned and predictable, while human actions are characterized by creativity. Moreover, spontaneous human actions are tied to the spontaneous origin and creation of social institutions.
Ethics of Socially Disruptive Technologies: An Introduction, 2023
Advancements in artificial intelligence and (social) robotics raise pertinent questions as to how these technologies may help shape the society of the future. The main aim of the chapter is to consider the social and conceptual disruptions that might be associated with social robots, and humanoid social robots in particular. This chapter starts by comparing the concepts of robots and artificial intelligence and briefly explores the origins of these expressions. It then explains the definition of a social robot, as well as the definition of humanoid robots. A key notion in this context is the idea of anthropomorphism: the human tendency to attribute human qualities, not only to our fellow human beings, but also to parts of nature and to technologies. This tendency to anthropomorphize technologies by responding to and interacting with them as if they have human qualities is one of the reasons why social robots (in particular social robots designed to look and behave like human beings) can be socially disruptive. As is explained in the chapter, while some ethics researchers believe that anthropomorphization is a mistake that can lead to various forms of deception, others — including both ethics researchers and social roboticists — believe it can be useful or fitting to treat robots in anthropomorphizing ways. The chapter explores that disagreement by, among other things, considering recent philosophical debates about whether social robots can be moral patients, that is, whether it can make sense to treat them with moral consideration. Where one stands on this issue will depend either on one’s views about whether social robots can have, imitate, or represent morally relevant properties, or on how people relate to social robots in their interactions with them. Lastly, the chapter urges that the ethics of social robots should explore intercultural perspectives, and highlights some recent research on Ubuntu ethics and social robots.
SSRN Electronic Journal, 2018
This article offers an overview of the main first-order ethical questions raised by robots and Artificial Intelligence (RAIs) under five broad rubrics: functionality, inherent significance, rights and responsibilities, side-effects, and threats. The first letter of each rubric taken together conveniently generates the acronym FIRST. Special attention is given to the rubrics of functionality and inherent significance given the centrality of the former and the tendency to neglect the latter in virtue of its somewhat nebulous and contested character. In addition to exploring some illustrative issues arising under each rubric, the article also emphasizes a number of more general themes. These include: the multiplicity of interacting levels on which ethical questions about RAIs arise, the need to recognise that RAIs potentially implicate the full gamut of human values (rather than exclusively or primarily some readily identifiable subset of ethical or legal principles), and the need for practically salient ethical reflection on RAIs to be informed by a realistic appreciation of their existing and foreseeable capacities. JOHN TASIOULAS 50 significant real-world presence when General Motors installed a robot, 'Unimate', in one of its plants to carry out manual tasks-such as welding and spraying-that were deemed too hazardous for human workers. 1 Today, robots are so commonplace in manufacturing that they are a major cause of unemployment in that sector. 2 But the use of robots in factories is only the beginning of a 'robot revolution'-itself part of wider developments powered by the science of Artificial Intelligence (AI)-that has had, or promises to have, transformative effects on all aspects of our lives. Robots are now being used, or being developed for use, in a vast array of settings. Driverless cars have already been invented and are expected to appear on our roads within a decade. 
These cars have the potential to reduce traffic accidents, which currently claim more than a million lives each year worldwide, by up to 90%, while also reducing pollution and traffic congestion (Bonnefon, Shariff, Rahwan 2016). Robots are also used to perform domestic chores, including vacuuming, ironing, and walking pets. In medicine and social care, robots surpass doctors in diagnosing certain forms of cancer or performing surgery, and they are used in therapy for children with autism or in the care of the elderly. Tutor robots already exist, as do social robots that provide companionship, or even sex. In the business world, AI figures heavily in the stock market, where computers make most decisions automatically, and in the insurance and mortgage industries. Even the recruitment of human workers is turning into a largely automated process, with many rejected job applications never being scrutinized by human eyes. AI-based technology, some of it robotic, also plays a role in the criminal justice system, assisting in policing and decisions on bail, sentencing, and parole. The development of autonomous weapons systems (AWSs), which select and attack military targets without human intervention, promises a new era in military defence. And this is just a sample of recent developments. In this article, I examine some of the key ethical questions posed by robots and AI (or RAIs, as I shall refer to them). The overall challenge, of course, is to harness the benefits of RAIs while responding adequately to the risks we incur in doing so. The need to balance benefit and risk is a recurrent one in the history of technological advance, but RAIs present it in a new and potentially sweeping form with large-scale implications for how we live among others, in relation to work, care, education, play, friendship, love, and even for how we understand what it is to be a human being.
2019
In this paper, the author proposes a theoretical framework for drawing a line between acceptable and nonacceptable technologies, with a focus on autonomous social robots. The author considers robots as mediations and their ethical acceptance as depending on their impact on the notion of presence. Presence is characterised by networks of reciprocity which make human beings subject and object of actions and perceptions at the same time. Technological mediation can either promote or inhibit the reciprocity of presence. A medium that inhibits presence deserves ethical evaluation since it prevents the possibility of a mutual exchange, thus creating a form of power. Autonomous social robots are a special kind of technological mediation because they replace human presence with a simulation of presence. Therefore, in interactions between human beings and autonomous robots, attention should be paid to the consequences on legal, moral, and social responsibility, and, at the same time, the imp...
Ethics and information technology, 2010
… of the 2008 conference on Tenth …, 2008
Roboethics is a recently developed field of applied ethics which deals with the ethical aspects of technologies such as robots, ambient intelligence, direct neural interfaces, invasive nano-devices, and intelligent soft bots. In this article we look specifically at the issue of (moral) responsibility in artificial intelligent systems. We argue for a pragmatic approach, where responsibility is seen as a social regulatory mechanism. We claim that having a system which takes care of certain tasks intelligently, learning from experience and making autonomous decisions, gives us reasons to talk about a system (an artifact) as being "responsible" for a task. No doubt, technology is morally significant for humans, so "responsibility for a task" with moral consequences could be seen as moral responsibility. Intelligent systems can be seen as parts of socio-technological systems with distributed responsibilities, where responsible (moral) agency is a matter of degree. Knowing that not all possible abnormal conditions of a system's operation can be predicted, and that no system can ever be tested for all possible situations of its use, the responsibility of a producer is to assure proper functioning of a system under reasonably foreseeable circumstances. Additional safety measures must however be in place in order to mitigate the consequences of an accident. The socio-technological system aimed at assuring a beneficial deployment of intelligent systems has several functional responsibility feedback loops which must operate properly: the awareness of, and procedures for handling, risks and responsibilities on the side of designers, producers, implementers, and maintenance personnel, as well as the understanding by society at large of the values and dangers of intelligent technology. The basic precondition for developing this socio-technological control system is the education of engineers in ethics and keeping alive the democratic debate on preferences about the future society.
The Political Economy of Communication, 2021
In early March 2021, the National Security Commission on Artificial Intelligence, co-chaired by Eric Schmidt, published a report arguing that the U.S. position on AI lagged behind that of China and Russia. A number of initiatives were proposed to remedy the situation. This development followed controversies surrounding Google's letting go of two prominent and respected ethicists. Meanwhile, robot police dogs were trialed in New York City's Bronx borough, and the global pandemic forced higher education to reckon with its own future amidst an already-dire student loan crisis. Into this environment, Frank Pasquale offers New Laws of Robotics, following his important The Black Box Society. In the earlier work, Pasquale made a compelling and chilling clarion call against the opaque processes of data collection and interpretive judgment that have encroached on social life. The present book builds upon the previous effort to advance a cohesive, optimistic program, although it invites deeper theorization of the object being addressed. If the 'robot question' in the 1960s focused on the automation of manufacturing jobs, now "the computerization of services is top of mind" (197). Economists dominate this debate, rendering it largely in terms of cost-benefit analysis that "prioritizes capital accumulation over the cultivation and development of democratically governed communities of expertise and practice" (197). Indeed, "Conversations about robots usually tend toward the utopian ('machines will do all the dirty, dangerous, or difficult work') or the dystopian ('and all the rest besides, creating mass unemployment'). But the future of automation in the workplace, and well beyond, will hinge on millions of small decisions about how to develop AI" (14). To this end, he proposes four 'laws' for artificial intelligence: 'complementarity,' 'authenticity,' 'cooperation,' and 'attribution.'
"A humane agenda for automation," he argues, "would prioritize innovations that complement workers in jobs that are, or ought to be, fulfilling vocations" (4). Pointing to the emergence of chatbots, appointment-assistants, and more, he suggests that "robotic systems and AI should not counterfeit humanity" (7). With an eye in particular toward military conflicts and automated, 'smart' policing, "Robotic systems and AI should not intensify zero-sum arms races" (9).
Információs Társadalom, 2018
The paper seeks to analyze the new ethical dilemmas that arise in the social contexts of the robot world. It builds on the theoretical foundation of Nicolai Hartmann's ontology, which locates ever-increasing artificial intelligence within reality among the layers of being. From this starting point, it examines the summative studies of robotics analysis already developed in English and considers the corrections that need to be made to them in light of the theory of four-layered human existence. Human existence and the life of human communities are based on the cumulative regularities of the layers of being that are built upon each other through evolution, according to the theses of Nicolai Hartmann's ontology (Hartmann, 1962). The accelerated development and increasing use of artificial intelligence (AI) in recent years directly affects the top layer of the four (physical, biological, spiritual, and intellectual) layers of being, increasing its strength to the detriment of the lower ones. And with the later development of artificial intelligence, eventually breaking away from human control and gaining independence, AI can be perceived as an evolutionarily created new layer of being. Unlike the three previous evolutionary leaps, however, it would not require all the lower layers of being: taking into account the robots that are the physical incarnations of AI today, AI needs only the physical layer of being (Pokol, 2017). Against this theoretical backdrop, the analyses in this study seek to explore the emerging moral and related legal dilemmas within the mechanisms of contemporary societies that are increasingly permeated by artificial intelligence, while at the same time considering the extent to which the analytical framework changes when the multi-layered nature of human lives, and thus of society, is constantly kept in mind.
Frontiers in Robotics and AI
It is almost a foregone conclusion that robots cannot be morally responsible agents, both because they lack traditional features of moral agency like consciousness, intentionality, or empathy and because of the apparent senselessness of holding them accountable. Moreover, although some theorists include them in the moral community as moral patients, on the Strawsonian picture of moral community as requiring moral responsibility, robots are typically excluded from membership. By looking closely at our actual moral responsibility practices, however, I determine that the agency reflected and cultivated by them is limited to the kind of moral agency of which some robots are capable, not the philosophically demanding sort behind the traditional view. Hence, moral rule-abiding robots (if feasible) can be sufficiently morally responsible and thus moral community members, despite certain deficits. Alternative accountability structures could address these deficits, which I argue ought to be ...
IEEE Technology and Society Magazine
Science and Engineering Ethics, 2019
International Journal of Social Robotics, 2021
Choice Reviews Online, 2012
Frontiers in Computer Science, 2022
AI and Ethics, 2024
The Southern Journal of Philosophy, 2022
Philosophy & Technology, 2019
Journal of Healthcare Engineering, 2021
Minds and Machines, 2021
Studies in Logic, Grammar and Rhetoric, 2020
Information
Frontiers in Robotics and AI, 2022
Ethics of AI and Robotics from a Non-Anthropomorphic and Non-Zoomorphic Perspective, 2023
Studies in the Philosophy of Sociality, 2017