Abstract. The principle of formal equality, one of the most fundamental and undisputed principles in ethics, states that a difference in treatment or value between two kinds of entities can only be justified on the basis of a relevant and significant difference between the two.
2003
In this paper I argue for an ethics of machines. In arguing for an ethics of machines I am not only arguing for consideration of the ethical implications of machines (which we already do) but also, and more importantly, for an ethics of machines qua machines, as such. Thus, I attempt to argue for a decentering of ethics, urging us to move beyond any centre, whatever it may be: anthropological, biological, etc. I argue that if we take ethics seriously we must admit that our only measure cannot be that of man. To develop the argument I use an episode of Star Trek in which the fate of the highly sophisticated android Commander Data is to be decided. I show how the moral reasoning about Data remains anthropocentric, despite some attempt to reach beyond it. I then draw on the work of Heidegger and Levinas to suggest a possible way to think (and do) a decentered ethics.
IEEE Transactions on Affective Computing, 2012
Immanuel Kant is one of the giants of moral theorizing in the western philosophical tradition. He developed a view of moral imperatives and duty that continues to inspire thought up to the present. In a thought-provoking series of papers, Anthony Beavers argues that Kant's conception of morality will not be applicable to machines. In other words, it will turn out that when we design machines at a level of sophistication such that ethical constraints must be built into their behavior, Kant's understanding of morality will not be helpful. Specifically, the notion of duty as involving some sort of internal conflict can be jettisoned. The argument in this paper is that there are aspects of duty that can be preserved for machine ethics. The goal will not be to defend any of the details of Kant's position. Rather, it is to motivate some ways of thinking about duty that may be useful for machine ethics.
This paper responds to the machine question in the affirmative, arguing that machines, like robots, AI, and other autonomous systems, can no longer be legitimately excluded from moral consideration. The demonstration of this thesis proceeds in three parts. The first and second parts approach the subject by investigating the two constitutive components of the ethical relationship—moral agency and patiency. In the process, they each demonstrate failure. This occurs not because the machine is somehow unable to achieve what is required to qualify as a moral agent or patient, but because the standard characterizations of agency and patiency already fail to accommodate not just machines but also those entities who are currently regarded as moral subjects. The third part responds to this systemic failure by formulating an approach to ethics that is oriented and situated otherwise. This alternative proposes an ethics that is based not on some prior discovery concerning the ontological status of others but on a decision that responds to and is able to be responsible for others and other kinds of otherness.
Philosophy & Technology, 2013
When it comes to the question of what kind of moral claim an intelligent or autonomous machine might have, one way to answer this is by way of comparison with humans: Is there a fundamental difference between humans and other entities? If so, on what basis, and what are the implications for science and ethics? This question is inherently imprecise, however, because it presupposes that we can readily determine what it means for two types of entities to be sufficiently different, what I will refer to as being "discontinuous". In this paper, I will sketch a formal characterization of what it means for types of entities to be unique with regard to each other. This expands upon Bruce Mazlish's initial formulation of what he terms a continuity between humans and machines, Alan Turing's epistemological approach to the question of machine intelligence, and Sigmund Freud's notion of scientific revolutions dealing blows to the self-esteem of mankind. I will discuss on what basis we should regard entities as (dis-)continuous, the corresponding moral and scientific implications, as well as an important difference between what I term downgrading and upgrading continuities, a dramatic difference in how two previously discontinuous types of entities might become continuous. All of this will be phrased in terms of which scientific levels of explanation we need to presuppose, in principle or in practice, when we seek to explain a given type of entity. The ultimate purpose is to provide a framework that defines which questions we need to ask if we argue that two types of entities ought (not) to be explained (hence treated) in the same manner, as well as what it takes to reconsider scientific and ethical hierarchies imposed on the natural and artificial world.
Machine Ethics
This theoretical concept of a machine as a pattern of operations which could be implemented in a number of ways is called a virtual machine. In modern computer technology, virtual machines abound. Successive versions of processor chips re-implement the virtual machines of their predecessors, so that old software will still run. Operating systems (e.g. Windows) offer virtual machines to application programs. Web browsers offer several virtual machines (notably Java) to the writers of Web pages. More importantly, any program running on a computer is a virtual machine. Usage in this sense is a slight extension of that in computer science, where the "machine" in "virtual machine" refers to a computer, specifically an instruction set processor. Strictly speaking, computer scientists should refer to "virtual processors", but they tend to refer to processors as "machines" anyway. For the purposes of our discussion here, we can call any program a virtual machine. In fact, I will drop the "virtual" and call programs simply "machines". The essence of a machine, for our purposes, is its behavior: what it does given what it senses (always assuming that there is a physical realization capable of actually doing the actions). To understand just how complex the issue really is, let's consider a huge, complex, immensely powerful machine we've already built. The machine is the U.S. Government and legal system. It is a lot more like a giant computer program than people realize. Really complex
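The abstract's central claim — that a machine is a pattern of operations definable by its behavior alone, realizable in any number of ways — can be sketched minimally in code. The functions and input/output vocabulary below are illustrative assumptions, not drawn from the text:

```python
# A "machine" characterized purely by behavior: what it does given what it
# senses. Two different implementations realize the same virtual machine
# whenever their observable input/output behavior is identical.
# Function names and the sense/act vocabulary here are hypothetical.

def table_machine(sensed):
    # Implementation 1: behavior stored as a lookup table.
    table = {"obstacle": "stop", "clear": "advance"}
    return table.get(sensed, "wait")

def rule_machine(sensed):
    # Implementation 2: the same behavior expressed as conditional rules.
    if sensed == "obstacle":
        return "stop"
    if sensed == "clear":
        return "advance"
    return "wait"

# Behavioral equivalence: over every input we can observe, the two
# implementations are indistinguishable, hence one virtual machine.
for s in ["obstacle", "clear", "fog"]:
    assert table_machine(s) == rule_machine(s)
```

In this sketch the "machine" is neither the table nor the rules but the mapping they share, which is the sense in which a processor, an operating system, or a program can each host the same virtual machine.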
Philosophy & Technology, vol. 27, no. 1, 2014
This essay responds to the machine question in the affirmative, arguing that artifacts, like robots, AI, and other autonomous systems, can no longer be legitimately excluded from moral consideration. The demonstration of this thesis proceeds in four parts or movements. The first and second parts approach the subject by investigating the two constitutive components of the ethical relationship—moral agency and patiency. In the process, they each demonstrate failure. This occurs not because the machine is somehow unable to achieve what is considered necessary and sufficient to be a moral agent or patient but because the characterization of agency and patiency already fails to accommodate others. The third and fourth parts respond to this problem by considering two recent alternatives—the all-encompassing ontocentric approach of Luciano Floridi’s information ethics and Emmanuel Levinas’s eccentric ethics of otherness. Both alternatives, despite considerable promise to reconfigure the scope of moral thinking by addressing previously excluded others, like the machine, also fail, but for other reasons. Consequently, the essay concludes not by accommodating the alterity of the machine to the requirements of moral philosophy but by questioning the systemic limitations of moral reasoning, requiring not just an extension of rights to machines but a thorough examination of the way moral standing has been configured in the first place.
Tally, Robert T., ed. Kurt Vonnegut: Critical Insights. Ipswich, CT: Salem Press, 2013. 248-68.
Are men smarter than machines? Can a machine be a gentleman? Are men more machine-like than machines themselves? Such questions make sense within the late-twentieth-century human-versus-machine dichotomy explored in a number of Vonnegut’s writings. For instance, the title “hero” of the 1950 short story “EPICAC,” one of Vonnegut’s first publications, is a fictional heavy-duty supercomputer whose qualities or “personality” call the classical hierarchy of human over machine into question. While machines with superhuman physical capabilities are pervasive in science fiction stories, computers that morally supersede humans are harder to find. In this chapter, I review the human/machine relationship apparent in the short story, in Player Piano (which features a computer named EPICAC XIV), and in other works. This examination draws upon the British mathematician-cryptographer Alan Turing’s ideas, especially those in his landmark paper, “Computing Machinery and Intelligence.” I consider how Turing’s concerns about machine intelligence are embodied in Vonnegut’s work, and discuss a possible world in which these concerns have become reality.
Intercultural Relations, 2019
In a society based on technology, the human being loses centrality, and scientific advancement and digital progress trigger a fourth revolution: the rupture of anthropocentrism, of Industry 4.0 and of the infosphere. The scientific and academic debate must focus its attention, among other things, on the formulation of new ethical principles that can guide a person in their interaction, interconnection and, in some cases, “fusion” with the “machine” and its accompanying values. The advent of artificial intelligences is producing changes in the management of common liberties, of private and public life, of the individual and of the community, who increasingly seek, in the “artificialisation” of the self and in their relationship with machines, places and subjects, reflections of interaction with each other and with the other self. The sophistication of technology and, therefore, of reality indicates the need to rethink the relationship between the tangibil...
Discussing Borders, Escaping Traps: Transdisciplinary and Transspatial Approaches, 2019
Man is a peculiar being, but where do the boundaries between human beings and machines lie? Many attempts to demarcate humanity, and thereby to identify what makes us special, have been made throughout history. These debates matter because they have implications for questions of both morality and politics. They are all the more important today because artificial beings now imitate just about every facet of humanity. This chapter examines various candidate criteria of demarcation, such as reason, understanding and emotions, and evaluates their merit. In the process, older debates of a similar character are briefly examined: debates over what sets man apart from animals.