Ethical Theory and Moral Practice
In today’s highly dynamic societies, moral norms and values are subject to change. Moral change is partly driven by technological developments. For instance, the introduction of robots in elderly care practices requires caregivers to share moral responsibility with a robot (see van Wynsberghe 2013). Since we do not know which elements of morality will change or how they will change (see van der Burg 2003), moral education should aim at fostering what has been called “moral resilience” (Swierstra 2013). We seek to fill two gaps in the existing literature: (i) research on moral education has not paid enough attention to the development of moral resilience; (ii) the very limited literature on moral resilience does not conceptualise moral resilience in relation to new technological developments. We argue that philosophical accounts of moral education need to do justice to the importance of moral resilience, and that a specific form of moral resilience should be conceptualised as “techno-moral resilience”.
This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as responsible moral agents, and thereby reduce them to moral patients. Since that ability and willingness are central to the value system in modern liberal democratic states, the crisis of moral patiency has a broad civilization-level significance: it threatens something that is foundational to and presupposed in much contemporary moral and political discourse. I defend this argument in three parts. I start with a brief analysis of an analogous argument made (or implied) in pop culture. Though that argument turns out to be hyperbolic and satirical, it proves instructive, as it illustrates a way in which the rise of robots could impact upon civilization even when the robots themselves are neither malicious nor powerful enough to bring about our doom. I then introduce the argument from the crisis of moral patiency, defend its main premises and address objections.
This chapter argues that the most interesting question is not whether smart technology is positively or negatively affecting individual moral sensitivity, but whether it is changing its scope. As intelligent machines become more pervasive and naturalistic, do they, in fact, take over much of the everyday moral territory where human sensitivity has traditionally operated alone? Reflection on moral sensitivity has generally assumed that the moral agent is human: sensing, thinking, and acting on morally salient information within the human-affected environment. This chapter argues that there are good reasons to question this assumption in the unfolding digital world, a world increasingly mapped, surveilled, and shaped by context-aware sensors, trackers, AI devices, and information connectivity. Making this case requires that we rethink both technology and moral sensitivity. Doing so may reveal that we are increasingly living in a world in which the human morally sensitive agent is extended through technology, and joined in its agency by new morally sensitive machines.
Ethics in Progress, 2019
Two major strategies (the top-down and bottom-up strategies) are currently discussed in robot ethics for moral integration. I will argue that neither strategy is sufficient. Instead, I agree with Bertram F. Malle and Matthias Scheutz that robots need to be equipped with moral competence if we do not want them to become a risk in society, causing harm, social problems or conflicts. However, I claim that we should not define moral competence merely as the sum of different “elements” or “components” that we can change at will. My suggestion is to follow Georg Lind’s dual-aspect, dual-layer theory of the moral self, which provides a broader perspective and a different vocabulary for the discussion in robot ethics. According to Lind, moral competence is only one aspect of moral behavior, which cannot be separated from its second aspect: moral orientation. As a result, the thesis of this paper is that integrating morality into robots has to include both moral orientation and moral competence.
2004
Our natural human inclination to seek a moral understanding of the world is one of the oldest cultural narratives handed down since the beginning of recorded history. Much has been written on the ontic nature of moral absolutes and the rhetorical discourse of what makes a good citizen, but neither metaphysics nor politics has been fully prepared for the instantly ubiquitous nature of digital technology and its effect on societies in the contemporary age of the networked world. This paper offers a philosophical inquiry into the nature of technical systems, developed through a set of anecdotal observations on civic engagement, moral education and service learning as historically practiced at Saint Mary's College of California. With digital culture making information instantly accessible, unlimitedly dense, and bereft of context, younger generations of learners are finding it increasingly difficult to differentiate between information and knowledge and are easily lured into a reliance…
Moral design and technology, 2022
In this chapter we introduce the central issue of this book: moral design and technology. We first explore what technology is, how it has always been a main driver of societal change, and why such changes usually elicit new moral challenges. We then focus on an unconstructive distinction between techno-optimists and techno-pessimists, and on why there is an urgent need for moral design approaches to overcome this contrast. Finally, we introduce the structure and chapters of this book, and dedicate some thought to inclusive authorship.
Key concepts
▶ Technology is 'the use of science in industry, engineering, etc., to invent useful things or to solve problems'
▶ A moral problem can be defined as a situation that requires a value judgement and in which multiple, mutually exclusive value judgements can each be right
▶ Societies are transforming into smart societies in which algorithms, artificial intelligence and robotisation are main themes
▶ Techno-optimists emphasise the potential and possibilities of technological innovation; techno-pessimists warn us about the immoral side of new technologies
▶ In a moral design approach we incubate morality as design principles when constructing new technologies
2007
The technological advances of contemporary society have outpaced our moral understanding of the problems that they create. How will we deal with profound ecological changes, human cloning, hybrid people, and eroding cyberprivacy, to name just a few issues? In this book, Lorenzo Magnani argues that existing moral constructs often cannot be applied to new technology. He proposes an entirely new ethical approach, one that blends epistemology with cognitive science.
Moral design and technology, 2022
Key concepts
1.1 Technology: the driver of revolutions (and moral problems)
1.2 Technology and philosophy
1.3 Towards moral design
1.4 The structure and chapters of this book
1.5 Inclusive authorship
References
Media

Part 1: Moral design process

2. Moral authority
Abstract
Key concepts
2.1 Moral programming, moral impact and moral authority
2.2 Moral problems
2.3 Moral decision-making and the engineer
2.4 Moral authority and technology
2.5 Conclusions
References
Legal texts, policy documents and case law

3. Moral impact
Abstract
Key concepts
3.1 Introduction
3.2 Why we need to think harder about the impact of technology
3.3 The impact of technology on humans from a philosophical viewpoint
3.4 The Technology Impact Cycle Tool
References
Media
Codified standards and case law
In this chapter, I ask whether we can coherently conceive of robots as moral agents and as moral patients. I answer both questions negatively but conditionally: for as long as robots lack certain features, they can be neither moral agents nor moral patients. These answers, of course, are not new. Yet they have recently been the object of sustained critical attention (Coeckelbergh 2014; Gunkel 2014). The novelty of this contribution, then, resides in arriving at these precise answers by way of arguments that avoid these recent challenges. This is achieved by considering the psychological and biological bases of moral practices and arguing that the relevant differences in such bases are sufficient, for the time being, to exclude robots from adopting both an active and a passive moral role.