2021
We can identify two different models for the development of artificial intelligence. In the first, AIs are complex data-processing or pattern-recognizing tools, lacking what we would describe as "human-like" intelligence. In this model, the ostensible reason for the lack of a specifically human-like intelligence is that the AIs are either not embodied or are "housed" in but not integrated with a body or body-like structure. Here, the machine is the seat of the rational and intellectual, while emotion, insight, and intuition remain products of the body. In the second model, intelligence is the result of integrating a "thinking" technology into a body of some sort, resulting in a recognizably human-like mind or intelligence. Ann Leckie's portrayal of artificial intelligence is more in line with this second model. Using characters in which AIs are integrated into biological bodies and then into technological ones, she provides a more complex and forward-looking picture of how AIs might come to have emotional experience, intuition, and flexibility of thought and action, a model that bears similarities both to the cyborgs of science fiction and to the second of the two AI models above. Her nuanced portrait of such a being can be taken as a possible end-goal for the creation of a more organismal, less constrained artificial mind.
PhaenEx, 2006
A. M. Turing argued that there was "little point in trying to make a 'thinking machine' more human by dressing it up in ... artificial flesh." We should, instead, draw "a fairly sharp line between the physical and the intellectual capacities of a man" (434). For over fifty years, drawing this line has meant disregarding the role flesh plays in our intellectual capacities. Correspondingly, intelligence has been defined in terms of the algorithms that both men and machines can perform. I would like to raise some doubts about this paradigm in artificial intelligence research. Intelligence, I believe, does not just involve the working of algorithms. It is founded on flesh's ability to move itself, to feel itself, and to engage in the body projects that accompanied our learning a language. This implies that to copy intelligence, i.e., to produce an artificial version of it, the flesh that forms its basis must also be reproduced.

I. I shall begin with some general remarks about the role of the body in our awareness of the world. This awareness has a "first-person character." It is always awareness from a particular point of view, a "here" that no one else shares. Behind this is the fact of embodiment, i.e., the fact that different bodies cannot occupy the same space and, hence, cannot share the same "here." Embodiment also underlies the particular foreground-background structure of experience. The view I have of some object is surrounded by a horizon or connected series of views that I
2006
Artificialities: From Artificial Intelligence to Artificial Culture
Subjectivity, Embodiment and Technology in Contemporary Speculative Texts

This thesis is an examination of the articulation, construction and representation of 'the artificial' in contemporary speculative texts in relation to notions of identity, subjectivity and embodiment. Conventionally defined, the artificial marks objects and spaces which are outside of the natural order and thus also beyond the realm of subjectivity, and yet they are simultaneously produced and constructed by human ideas, labor and often technologies. Artificialities thus act as a boundary point against which subjectivity is often measured, even though that border is clearly drawn and re-drawn by human hands. Paradoxically, the artificial is, at times, also deployed to mark a realm where minds and bodies are separable, ostensibly devaluing the importance of embodiment. Speculative texts, which include science fiction and sim...
Lecture Notes in Business Information Processing
The main challenge of technology is to facilitate tasks and to transfer functions usually performed by humans to non-humans. However, the pervasiveness of machines in everyday life requires that non-humans come increasingly closer in their abilities to the ordinary thought, action and behaviour of humans. This view revives the idea of the Humaniter, a long-standing myth in the history of technology: an artificial creature that thinks, acts and feels like a human to the point that one cannot tell the difference between the two. In the wake of the opposition between Strong AI and Weak AI, this challenge can be expressed in terms of a shift from the performance of intelligence (reason, reasoning, cognition, judgment) to that of sentience (experience, sensation, emotion, consciousness). In other words, the challenge of technology, if this possible shift is taken seriously, is to move from the paradigm of Artificial Intelligence (AI) to that of Artificial Sentience (AS). But for the Humaniter not to be regarded as a mere myth, any intelligent or sentient machine must pass a Test of Humanity that either builds on or departs from the Turing Test. One can suggest several options for this kind of test and also point out some conditions and limits to the very idea of the Humaniter as an artificial human.
Prometeica - Revista de Filosofía y Ciencias
In this paper, I explore the fruitful relationship between science fiction and philosophy regarding the topic of artificial intelligence. I establish a connection between certain paradigms in the philosophy of mind and consciousness and the imagination of possible future scenarios in sci-fi, focusing especially on the different ways of conceiving the role of corporeality in constituting consciousness and cognition. I then draw a parallel between these different conceptions of corporeality in the philosophy of mind and certain representations of AI in sci-fi: from computers to robots and androids. I conclude by stressing the value of exchanging ideas between sci-fi and philosophy to foreshadow and evaluate some scenarios of high ethical relevance.
Forum Philosophicum, 2019
The idea of artificial intelligence implies the existence of a form of intelligence that is "natural," or at least not artificial. The problem is that intelligence, whether "natural" or "artificial," is not well defined: it is hard to say what, exactly, is or constitutes intelligence. This difficulty makes it impossible to measure human intelligence against artificial intelligence on a unique scale. It does not, however, prevent us from comparing them; rather, it changes the sense and meaning of such comparisons. Comparing artificial intelligence with human intelligence could allow us to understand both forms better. This paper thus aims to compare and distinguish these two forms of intelligence, focusing on three issues: forms of embodiment, autonomy and judgment. Doing so, I argue, should enable us to have a better view of the promises and limitations of present-day artificial intelligence, along with its benefits and dangers and the place we should make for it in our culture and society.
Pro-Fil, 2021
In the seventy years since AI became a field of study, the theoretical work of philosophers has played an increasingly important role in understanding many aspects of the AI project: the metaphysics of mind and what kinds of systems can or cannot implement them, the epistemology of objectivity and algorithmic bias, the ethics of automation, drones, and specific implementations of AI, and analyses of AI embedded in social contexts, among others. Serious scholarship in AI ethics sometimes quotes Asimov's speculative laws of robotics as if they were genuine proposals, and yet Lem remains historically undervalued as a theorist who uses fiction as his vehicle. Here, I argue that Lem's fiction (in particular his fiction about robots) is overlooked but highly nuanced philosophy of AI, and that we should recognize the lessons he tried to offer us, which focus on human and social failures rather than technological breakdowns. Stories like "How the World Was Saved" and "Upside Down Evolution" ask serious philosophical questions about AI metaphysics and ethics, and offer insightful answers that deserve more attention. Highlighting some of this work from The Cyberiad and the stories in Mortal Engines in particular, I argue that the time has never been more appropriate to attend to his philosophy in light of the widespread technological and social failures brought about by the quest for artificial intelligence. In service of this argument, I discuss some of the history and philosophical debates around AI in recent decades, as well as contemporary events that illustrate Lem's strongest claims in critique of the human side of AI.
Imprint Academic, 2020
Becoming Artificial is a collection of essays about the nature of humanity, technology, artifice, and the irreducible connections between them. Is there something fundamental to being human or are humans simply biological computers?
AI and the Human Body, 2024
In examining the human experience, we find it shaped by three interwoven dimensions: the physical, mental, and emotional bodies. The physical body roots us in sensory reality, allowing direct interaction with the external world and grounding our presence. The mental body is the realm of thought, reason, and consciousness, where we construct meaning and engage with abstract ideas, framing our experiences in coherent narratives. Finally, the emotional body encompasses the subtleties of feeling, intuition, and relational awareness, infusing our
This paper begins by focusing on the recent work of David Gelernter on artificial intelligence (AI), in which he argues against 'computationalism', the conception of the mind which restricts it to functions of abstract reasoning and calculation. Such a notion of the human mind, he argues, is overly narrow, because the 'tides of mind' cover a larger and more variegated 'spectrum' than computationalism allows. Hubert Dreyfus's argument is then examined: that the AI research community should concentrate its efforts on replacing its cognitivist approach with a Heideggerian one, recognizing that AI research can ignore neither the 'embeddedness' of human intelligence in a world nor its 'embodiment'. However, Gelernter and Dreyfus do not go far enough in their critique of AI research: what is truly human is not just a certain kind of intelligence; it is the capacity for 'care' and desire in the face of mortality, which no machine can simulate.