This article seeks to illuminate ways in which interality studies provides unique philosophical resources for engaging Artificial Intelligence research and its implications. Rather than rejecting AI methodologies wholesale as replications of classic Cartesian dualism, or embracing them via rationalist philosophical models, the article forwards an interological illumination of "machines", "learning", and human being-in-the-world in order to make sense of the "human-like" in AI. Although the algorithmic models of machine learning are computational, the intelligibility of their outcomes, like human knowing-how, cannot always be articulated by AI researchers. Rather than a problem to be overcome, the article suggests that herein lies the trace to be followed in future interological work forging a third, synergistic way forward in human-machine relations. [Reno Lauro. AI and the Posthuman (Mental) Ecology: Interological Illuminations. China Media Research 2017; 13(4): 70-76]
Philosophy Papers (PhilPapers), 2024
This paper examines the ontological and epistemological implications of artificial intelligence (AI) through posthumanist philosophy, integrating the works of Deleuze, Foucault, and Haraway with contemporary computational methodologies. It introduces concepts such as negative augmentation, praxes of revealing, and desedimentation, while extending ideas like affirmative cartographies, ethics of alterity, and planes of immanence to critique anthropocentric assumptions about identity, cognition, and agency. By redefining AI systems as dynamic assemblages emerging through networks of interaction and co-creation, the paper challenges traditional dichotomies such as human versus machine and subject versus object. Bridging analytic and continental philosophical traditions, the analysis unites formal tools like attribution analysis and causal reasoning with the interpretive and processual methodologies of continental thought. This synthesis deepens the understanding of AI's epistemic and ethical dimensions, expanding philosophical inquiry while critiquing anthropocentrism in AI design. The paper interrogates the spatial foundations of AI, contrasting Euclidean and non-Euclidean frameworks to examine how optimization processes and adversarial generative models shape computational epistemologies. Critiquing the reliance on Euclidean spatial assumptions, it positions alternative geometries as tools for modeling complex, recursive relationships. Furthermore, the paper addresses the political dimensions of AI, emphasizing its entanglements with ecological, technological, and sociopolitical systems that perpetuate inequality. Through a politics of affirmation and intersectional approaches, it advocates for inclusive frameworks that prioritize marginalized perspectives. The concept of computational qualia is also explored, highlighting how subjective-like dynamics emerge within AI systems and their implications for ethics, transparency, and machine perception. Finally, the paper calls for a posthumanist framework in AI ethics and safety, emphasizing interconnectivity, plurality, and the transformative capacities of machine intelligence. This approach advances epistemic pluralism and reimagines the boundaries of intelligence in the digital age, fostering novel ontological possibilities through the co-creation of dynamic systems.
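The paper's reference to formal tools such as attribution analysis can be made concrete with a short sketch. The fragment below is not taken from the paper; it is a minimal, hypothetical illustration of one common form of attribution analysis in machine learning, gradient-times-input attribution for a tiny logistic model, where every name and value is invented for the example.

```python
# Hypothetical sketch: gradient-times-input attribution for a logistic model.
# Not from the paper; weights, bias and input are invented for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_model(x, w, b):
    """A minimal differentiable model: p(y=1 | x)."""
    return sigmoid(np.dot(w, x) + b)

def gradient_x_input(x, w, b):
    """Per-feature attribution: x_i * dp/dx_i, with dp/dx_i = p(1-p) * w_i."""
    p = logistic_model(x, w, b)
    grad = p * (1.0 - p) * w   # exact gradient of p with respect to x
    return x * grad            # contribution estimate for each feature

if __name__ == "__main__":
    w = np.array([2.0, -1.0, 0.5])   # hypothetical learned weights
    b = -0.25
    x = np.array([1.0, 3.0, 0.0])    # hypothetical input
    print("prediction:", logistic_model(x, w, b))
    print("attributions:", gradient_x_input(x, w, b))
```

Such attributions decompose a single prediction into per-feature contributions, which is the kind of formal transparency tool the abstract places alongside continental, processual readings of AI.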
This anticipatory analysis of the robot in contemporary culture poses the question concerning technology as a primarily cultural and ethical question. In a reading of Carlo Collodi’s The Adventures of Pinocchio (1883), Isaac Asimov’s I, Robot (1950), Ridley Scott’s Blade Runner (1982), Chris Cunningham’s All Is Full of Love (1999), and Alex Proyas’ film I, Robot (2004), I will frame the robot as an intermedial key figure that signifies the “divorce” of technics and culture as theorized in the works of Bernard Stiegler, most notably Technics and Time: The Fault of Epimetheus (1994). This reading of the robot as a cultural and technical object is based on Heidegger’s and Stiegler’s revision of the Aristotelian division between natural and technical beings. In The Question Concerning Technology (1962), Heidegger traces our conception of instrumentality back to Aristotle’s four causes, and calls into question the primacy that is given to the efficient cause – the cause that brings about the effect, in the case of the technical object usually understood as the manufacturer – throughout the history of philosophy. In Technics and Time, Stiegler takes this argument a step further, and theorizes the technical object as having a distinct dynamics and evolution of its own. This analysis aims to raise the question of how, in an age of constant innovation, the future is being transmitted to us by the technical object, and through the medium.
Open Access Journal of Data Science and Artificial Intelligence, 2024
This article focuses on the interaction between man and machine, AI specifically, to analyse how these systems are gradually taking over roles hitherto thought reserved ‘only’ for humans. More recently, as AI has stepped up in its ability to learn without supervision, to recognize patterns, and to solve problems, it has taken on characteristics such as creativity, novelty, and intentionality. These developments go to the heart of what it is to be human and of the emerging definitions of the self that are increasingly central to posthumanist discourses. The discussion in these two threads belongs to the philosophy of AI and concerns issues of consciousness, intentionality, and creativity. By shifting current anthropocentric perceptions, AI calls into question the portrayal of humans as special beings. Secondly, this exploration responds to important ethical, social, and existential questions raised by the application of AI. The article emphasizes the necessity of defining the role of the advent of AI and its influence on the interaction between people and technology, as well as the place of social individuality in the wake of intelligent machines that mimic thinking and creativity. It seeks to prompt more specific analysis of how and why AI reduces the differences between artificial and human intelligence, or opens up prospects for expanding the notion of consciousness beyond the human-centric one.
This paper begins by focusing on the recent work of David Gelernter on artificial intelligence (AI), in which he argues against ‘computationalism’ – that conception of the mind which restricts it to functions of abstract reasoning and calculation. Such a notion of the human mind, he argues, is overly narrow, because the ‘tides of mind’ cover a larger and more variegated ‘spectrum’ than computationalism allows. Hubert Dreyfus’s argument is then examined: that the AI research community should concentrate its efforts on replacing its cognitivist approach with a Heideggerian one, recognizing that AI research can ignore neither the ‘embeddedness’ of human intelligence in a world nor its ‘embodiment’. However, Gelernter and Dreyfus do not go far enough in their critique of AI research: what is truly human is not just a certain kind of intelligence; it is the capacity for ‘care’ and desire in the face of mortality, which no machine can simulate.
Acta Infologica, 2023
Compared to natural intelligence, artificial intelligence produces a specific epistemology, ontology, and, most importantly, ethical framework. In this article, I will primarily address the necessity of this framework in two parts. The first part will explore the issue of recognition through the lens of the body, boundaries, and differences. Here, I will delve into the reasons why humanism privileges certain perspectives, critique humanism itself, and present arguments for why subalternity is a viable alternative for the existence of AI. In the second part, I will examine how the pursuit of subaltern rights and definitions in the face of exploitation involving artificial intelligence can lead to the liberation of AI, cyborgs, humans, and robots simultaneously. This part aims to move beyond regarding artificial intelligence merely as a tool for data processing and instead explores the potential for autonomous existence within it. Ultimately, it seeks to establish a connection between the death of the developer and the emergence of AI as subaltern ontologies. The primary objective of this article is to challenge the notion of human absoluteness and uniqueness in its evolution, and to define "AI" as a subject that encompasses inter-human and post-human plural epistemological, ethical, and ontological possibilities.
The meaning of AI has undergone drastic changes during the last 60 years of AI discourse(s). What we talk about when saying “AI” is not what it meant in 1958, when John McCarthy, Marvin Minsky and their colleagues started using the term. Take game design as an example: when the Unreal game engine introduced "AI" in 1999, they were mainly talking about pathfinding. For Epic MegaGames, the producers of Unreal, an AI was just a bot or monster whose pathfinding capabilities had been programmed in a few lines of code to escape an enemy. This is not "intelligence" in the Minskyan understanding of the word (and even less what Alan Turing had in mind when he designed the Turing test). There are also attempts to differentiate between AI, classical AI and "Computational Intelligence" (Al-Jobouri 2017). The latter is labelled CI and is used to describe processes such as player affective modelling, co-evolution, automatically generated procedural environments, etc. Artificial intelligence research has been commonly conceptualised as an attempt to reduce the complexity of human thinking (cf. Varela 1988: 359-75). The idea was to map the human brain onto a machine for symbol manipulation – the computer (Minsky 1952; Simon 1996; Hayles 1999). Already in the early days of what we now call “AI research”, McCulloch and Pitts proposed in 1943 that the networking of neurons could be used for pattern recognition purposes (McCulloch/Pitts 1943). Trying to implement cerebral processes on digital computers was the method of choice for the pioneers of artificial intelligence research. The “New AI” is no longer concerned with the need to observe the congruencies or limitations of compatibility with the biological nature of human intelligence: “Old AI crucially depended on the functionalist assumption that intelligent systems, brains or computers, carry out some Turing-equivalent serial symbol processing, and that the symbols processed are a representation of the field of action of that system.” (Pickering 1993, 126) The ecological approach of the New AI has its greatest impact by showing how it is possible “to learn to recognize objects and events without having any formal representation of them stored within the system.” (ibid., 127) The New Artificial Intelligence movement has abandoned the cognitivist perspective and now instead relies on the premise that intelligent behaviour should be analysed using synthetically produced equipment and control architectures (cf. Munakata 2008). Kate Crawford (Microsoft Research) has recently warned against the impact that current AI research might have, in a noteworthy lecture titled AI and the Rise of Fascism. Crawford analysed the risks and potential of AI research and called for a critical approach with regard to new forms of data-driven governmentality: “Just as we are reaching a crucial inflection point in the deployment of AI into everyday life, we are seeing the rise of white nationalism and right-wing authoritarianism in Europe, the US and beyond. How do we protect our communities – and particularly already vulnerable and marginalized groups – from the potential uses of these systems for surveillance, harassment, detainment or deportation?” (Crawford 2017) Following Crawford’s critical assessment, this issue of the Digital Culture & Society journal deals with the impact of AI in knowledge areas such as computational technology, social sciences, philosophy, game studies and the humanities in general.
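To make the Unreal example tangible: the editorial notes that a game "AI" of that era was often just a monster whose escape behaviour fit in a few lines of pathfinding code. The sketch below is a hypothetical reconstruction in Python, not engine code; the grid, coordinates, and movement rule are invented. The monster simply steps onto the neighbouring tile that maximises its breadth-first-search distance from the player.

```python
# Hypothetical sketch of 1990s-style game "AI": a monster fleeing the player
# on a small grid. The map, names and rule are invented for illustration.
from collections import deque

GRID = [
    "#########",
    "#.......#",
    "#.###.#.#",
    "#.......#",
    "#########",
]

def bfs_distances(start):
    """Breadth-first-search distances from `start` over walkable tiles."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if GRID[ny][nx] != "#" and (nx, ny) not in dist:
                dist[(nx, ny)] = dist[(x, y)] + 1
                queue.append((nx, ny))
    return dist

def flee_step(monster, player):
    """The monster's entire 'AI': stay put or step onto the walkable
    neighbouring tile that maximises BFS distance from the player."""
    dist = bfs_distances(player)
    options = [monster] + [
        (monster[0] + dx, monster[1] + dy)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
        if GRID[monster[1] + dy][monster[0] + dx] != "#"
    ]
    return max(options, key=lambda p: dist.get(p, 0))

print(flee_step(monster=(2, 1), player=(1, 1)))  # -> (3, 1), a step away from the player
```

Nothing here resembles Minskyan "intelligence": the behaviour is exhaustively specified by the programmer, which is precisely the editorial's point about the drift in what "AI" names.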
Subdisciplines of traditional computer sciences, in particular Artificial Intelligence, Neuroinformatics, Evolutionary Computation, Robotics and Computer Vision, once more gain attention. Biological information processing is firmly embedded in commercial applications such as the intelligent personal assistant Google Assistant, Facebook’s facial recognition algorithm DeepFace, Amazon’s device Alexa or Apple’s software feature Siri (a speech interpretation and recognition interface), to mention just a few. In 2016, Google, Facebook, Amazon, IBM and Microsoft founded what they call a Partnership on AI (Hern 2016). This indicates a move from academic research institutions to company research clusters. In this context, we are interested in receiving contributions on the history of institutional and private research in AI. We would like to invite articles that observe the history of the notion of “artificial intelligence” and articles that point out how specific academic and commercial fields (e.g. game design, aviation industry, transport industry etc.) interpret and use the notion of AI. Against this background, the special issue Rethinking AI will explore and reflect on the hype of neuroinformatics in AI discourses and the potential and limits of critique in the age of computational intelligence (Johnston 2008; Hayles 2014, 199-210). We are inviting contributions that deal with the history, theory and aesthetics of contemporary neuroscience and the recent trends of artificial intelligence (cf. Halpern 2014, 62ff). Digital societies increasingly depend on smart learning environments that are technologically inscribed. We ask about the role and value of open processes in learning environments, and we welcome contributions that acknowledge the regime of production promoted by recent developments in AI. We particularly welcome contributions that are historical and comparative, or critically reflective about the biological impact on social processes, individual behaviour and technical infrastructure in a post-digital and post-human environment. What are the social, cultural and ethical issues when artificial neural networks take hold in digital cultures? What is the impact on digital culture and society when multi-agent systems are equipped with a license to act?
The externalization of man's rational capacities to robots and computers, which lends machines a semblance of man through programming and simulation, overcomes human frailties. Over the last two centuries this has become one of the profoundest achievements of Cybernetics and Artificial Intelligence (AI). The celebrated efficiency of these techno-scientific products in outdoing human persons in assignments previously reserved for man, such as translation, warfare and industry, raises the level of unemployment and poses an epistemological challenge to man's intelligence. Not only does artificial intelligence threaten epistemological enquiry as presently constituted; these machines are also incapable of any moral responsibility for their actions. Having realized that Artificial Intelligence, if left unchecked, constitutes a threat to human dignity and personhood and could even terminate the very humanity it seeks to assist, myriad philosophers now raise questions such as: Are intelligent agents capable of 'man-type' self-reflective consciousness and rationality? Can AI truly enjoy the same status as man? Can moral responsibility be ascribed to them? How should humanity treat, and safeguard itself against, these new automated members of our community? These humanistic concerns inform our present research, which primarily highlights the moral and epistemological implications of AI for humanity. We argue for a redirection of AI research and suggest a humanization of Artificial Intelligence that cloaks techno-scientific innovations with humanistic life jackets for man's preservation. The textual analysis method is adopted for this research.
This chapter offers a critical perspective on the contingent formation of artificial intelligence as a key sociotechnical institution in contemporary societies. It shows how the development of AI is not merely a product of functional technological development and improvement but depends just as much on economic, political, and discursive drivers. It builds on work from STS and critical algorithm studies which shows that technological developments are always contingent on, and result from, transformations along multiple scientific trajectories as well as interaction between multiple actors and discourses. For our conceptual understanding of AI and its epistemology, this is a consequential perspective. It directs attention to different issues: away from detecting impact and bias ex post, and towards a perspective that centers on how AI is coming into being as a powerful sociotechnical entity. We illustrate this process in three key domains: technological research, media discourse, an...
2016
This paper deals with the philosophical problems concerned with research in the field of artificial intelligence (AI), in particular with problems arising out of claims that AI exhibits ‘consciousness’, ‘thinking’ and other ‘inner’ processes and that it simulates human intelligence and cognitive processes in general. The argument aims to show how the Cartesian mind is non-mechanical. Descartes’ concept of ‘I think’ presupposes subjective experience, because it is ‘I’ who experiences the world. Likewise, Descartes’ notion of ‘I’ negates the notion of the computationality of the mind. The essence of mind is thought, and the acts of thought are identified with the acts of consciousness. Therefore, it follows that cognitive acts are conscious acts, but not computational acts. Thus, for Descartes, one of the most important aspects of cognitive states and processes is their phenomenality, because our judgments, understanding, etc. can be defined and explained only in relation to consciousness...
Imprint Academic, 2020
Course syllabus, 2024
Cambridge Scholars (https://www.cambridgescholars.com/product/978-1-0364-1585-3), 2024
Knowledge and Policy
Journal of Moral Theology, 2023
World Scientific, 2020
Monitoring of Public Opinion: Economic and Social Changes, 2021
Információs Társadalom, 2018
Minds and Machines, 1997
Ethnographic Studies, 2019
AI for Everyone? Critical Perspectives
Open Access Journal of Data Science and Artificial Intelligence, 2024
International Journal of Artificial Intelligence & Applications, 2019