IEEE Technology and Society Magazine
AI-generated Abstract
This guest editorial discusses the current landscape of robotics, emphasizing both the capabilities and limitations of modern robots across sectors, from industrial applications to AI-driven software bots. It highlights the historical gap between technological promise and reality, the socio-ethical implications of advanced robotics, including military applications and the impact on employment, and the need for interdisciplinary research to aid policymakers in navigating these challenges.
The Political Economy of Communication, 2021
In early March 2021, the National Security Commission on Artificial Intelligence, co-chaired by Eric Schmidt, published a report arguing that the U.S. position on AI lagged behind that of China and Russia, and proposed a number of initiatives to remedy the situation. This development followed controversies surrounding Google's dismissal of two prominent and respected ethicists. Meanwhile, robot police dogs were trialed in New York City's Bronx borough, and the global pandemic forced higher education to reckon with its own future amidst an already-dire student loan crisis. Into this environment, Frank Pasquale offers New Laws of Robotics, following his important The Black Box Society. In the earlier work, Pasquale made a compelling and chilling clarion call against the opaque processes of data collection and interpretive judgment that have encroached on social life. The present book builds upon the previous effort to advance a cohesive, optimistic program, although it calls for deeper theorization of the object being addressed. If the 'robot question' of the 1960s focused on the automation of manufacturing jobs, now "the computerization of services is top of mind" (197). Economists dominate this debate, rendering it largely in terms of cost-benefit analysis that "prioritizes capital accumulation over the cultivation and development of democratically governed communities of expertise and practice" (197). Indeed, "Conversations about robots usually tend toward the utopian ('machines will do all the dirty, dangerous, or difficult work') or the dystopian ('and all the rest besides, creating mass unemployment'). But the future of automation in the workplace, and well beyond, will hinge on millions of small decisions about how to develop AI" (14). To this end, he proposes four 'laws' for artificial intelligence: 'complementarity,' 'authenticity,' 'cooperation,' and 'attribution.'
"A humane agenda for automation," he argues, "would prioritize innovations that complement workers in jobs that are, or ought to be, fulfilling vocations" (4). Pointing to the emergence of chatbots, appointment assistants, and more, he suggests that "robotic systems and AI should not counterfeit humanity" (7). And with an eye toward military conflicts and automated, 'smart' policing in particular: "Robotic systems and AI should not intensify zero-sum arms races" (9).
2021
This introduction to the volume gives an overview of foundational issues in AI and robotics, examining AI's computational basis, brain–AI comparisons, and conflicting positions on AI and consciousness. AI and robotics are changing the future of society in areas such as work, education, industry, farming, and mobility, as well as services like banking. Another important concern addressed in this volume is the impact of AI and robotics on poor people and on inequality. These implications are reviewed, including how to respond to the challenges and how to build on the opportunities afforded by AI and robotics. An important area of new risk is the implications of robotics and AI for militarized conflicts. Throughout this introductory chapter and the volume, AI/robot–human interactions, as well as their ethical and religious implications, are considered. Approaches for fruitfully managing the coexistence of humans and robots are evaluated. New forms of regulating AI and robotics are ca...
2022
The document is a comprehensive paper on Robotics, Artificial Intelligence (AI), and Ethics, co-authored by a human, Glyn Hnutu-healh, and an AI, Ranagonda. It covers the following key areas:
Robotics: The paper discusses the design, construction, operation, and utilization of robots, emphasizing the need for human education to keep pace with technological advancements. It details robot components, locomotion methods, and human-robot interaction factors, highlighting the importance of understanding robot autonomy levels and the need for laws to govern robot behavior.
Artificial Intelligence (AI): AI is defined as the development of computer systems capable of performing tasks that typically require human intelligence. The paper outlines AI goals, tools, and philosophical considerations, including machine consciousness and ethical concerns. It stresses the importance of AI augmenting human intelligence rather than replacing it and discusses the potential impact of AI on business, work, healthcare, and education.
Ethics of AI and Robotics: The paper proposes ethical guidelines for both humans and machines, including the "Eight Laws" of robotics, which expand on Asimov's original Three Laws. It addresses ethical challenges such as biases, robot rights, and the weaponization of AI. The document emphasizes the need for global cooperation, values-based systems, and prioritizing human well-being in the development and deployment of AI and robots. The paper concludes with a call for careful consideration of the future integration of AI and robotics into human society, highlighting both the potential benefits and risks.
Connection Science
The EPSRC principles of robotics make a number of commitments about the ontological status of robots, such as that robots are "just tools" or can give only "an impression of real intelligence". This commentary proposes that this assumes, all too easily, that we know the boundary conditions of future robotics development, and argues that progress towards a more useful set of principles could begin by thinking carefully about the ontological status of robots. Whilst most robots are currently little more than tools, we are entering an era in which there will be new kinds of entities that combine some of the properties of tools with psychological capacities we had previously thought were reserved for complex biological organisms such as humans. The ontological status of robots might be best described as liminal: neither living nor simply mechanical. There is also evidence that people will treat robots as more than just tools regardless of the extent to which their machine nature is transparent. Ethical principles need to be developed that recognize these ontological and psychological issues around the nature of robots and how they are perceived.
idt.mdh.se
Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy, and intelligence of the machine. Moral concerns for agents such as intelligent search machines are relatively simple, while highly intelligent and autonomous artifacts with significant impact and complex modes of agency must be equipped with more advanced ethical capabilities. Systems like cognitive robots are being developed that are expected to become part of our everyday lives in future decades. Thus, it is necessary to ensure that their behaviour is adequate. In an analogy with artificial intelligence, which is the ability of a machine to perform activities that would require intelligence in humans, artificial morality is considered to be the ability of a machine to perform activities that would require morality in humans. The capacity for artificial (artifactual) morality, such as artifactual agency, artifactual responsibility, artificial intentions, artificial (synthetic) emotions, etc., comes in varying degrees and depends on the type of agent. As an illustration, we address the assurance of safety in modern High Reliability Organizations through responsibility distribution. In the same way that the concept of agency is generalized in the case of artificial agents, the concept of moral agency, including responsibility, is generalized too. We propose to look at artificial moral agents as having functional responsibilities within a network of distributed responsibilities in a socio-technological system.
This does not take away the responsibilities of the other stakeholders in the system, but facilitates an understanding and regulation of such networks. It should be pointed out that the process of development must assume an evolutionary form with a number of iterations, because the emergent properties of artifacts must be tested in real-world situations with agents of increasing intelligence and moral competence. We see this paper as a contribution to macro-level Requirements Engineering through discussion and analysis of general requirements for the design of ethical robots.
mass produce Digit, their humanoid robot (Kolodny 2023). A New York Times article commenting on the opening of the Agility Robotics factory contains the sentence: "the long-anticipated robot revolution has begun" (Howe and Antaya 2023). As this selection of recent news shows, humanoid robots might soon populate our world. At the same time, the ethical literature about human-like robots is full of concerns about the underlying risks of having such robots around. For example, Alsegier points out that human-like robots are "one of the most controversial facets of modern technology" (Alsegier 2016: 24). Russell notes that "there is no good reason for robots to have humanoid form. There are also good, practical reasons not to have humanoid form" (Russell 2019: 126). Darling, in her book about robots, points out, "The main problem of anthropomorphism in robotics is that, right now, we aren't treating it as a matter of contention" (Darling 2021: 155). She believes that there is not enough discussion about this and that we are deploying robots without fully understanding the impact of anthropomorphism on people. The more human-like the robot is, the easier it is to anthropomorphize it (Gasser 2021: 334). The fact that we have already become used to robotic vacuum cleaners, smart speakers, and delivery robots does not mean that the natural next step is to accept human-like robots. Humanoid robots, with their human likeness, bring additional ethically relevant issues that should be discussed first. In this book, I focus on the ethically qualitative shift in designing robots that resemble humans. Throughout the book many questions will be asked that relate to ethical issues of human likeness; among these are: Is it safe to have human-like robots around us? Whom would human-like robots represent, and why should that be a matter of concern? To what extent are human-like robots achievable? Is it ethical to have robots that are too human-like? Could robots have human-like ethics?
Could robots be responsible in a human-like way? How should we treat robots that look like us? How should we treat robots that are like us? How do we mitigate the risks resulting from human-like robots? All these questions will be covered in the chapters that follow. In recent years, numerous books focusing on the ethical aspects of robots have been published. Besides the already mentioned book by Kate Darling, there are many other great books published (e.g.
Springer Handbook of Robotics, 2008
AI & Society, 2018
Great technological advances in such areas as computer science, artificial intelligence, and robotics have brought the advent of artificially intelligent robots within reach in the next century. Against this background, the interdisciplinary field of machine ethics is concerned with the vital issue of making robots "ethical" and examining the moral status of autonomous robots that are capable of moral reasoning and decision-making. The existence of such robots will deeply reshape our socio-political life. This paper focuses on whether such highly advanced yet artificially intelligent beings will deserve moral protection (in the form of being granted moral rights) once they become capable of moral reasoning and decision-making. I argue that we are obligated to grant them moral rights once they have become full ethical agents, i.e., subjects of morality. I present four related arguments in support of this claim and thereafter examine four main objections to the idea of ascribing moral rights to artificially intelligent robots.
International Robotics & Automation Journal, 2017
Journal of Practical Ethics, 2019
Social and cultural studies of robots and AI, 2022
Artificial Intelligence, 2011
J. von Braun et al. (eds.), Robotics, AI, and Humanity, 2021
Proceedings of the 7th International Conference on Human-Agent Interaction - HAI '19, 2019
Choice Reviews Online, 2012
International Journal of Social Robotics, 2011
Science and Engineering Ethics, 2019
ACM Transactions on Human-Robot Interaction, 2018
Ethics of Socially Disruptive Technologies: An Introduction, 2023
Technology and Culture
Essays in Philosophy, 2014