2024, Course syllabus
Syllabus for the 2024 version of Topics in Philosophy of AI, taught as the core module for the MA in Philosophy of AI at the University of York. This course explores social, political, moral, metaphysical, and epistemological issues surrounding artificial intelligence. We will consider questions such as: What would it take for machines to have subjective experiences? Could machines deserve moral treatment? Can machines create art? How have new technologies affected the roles of traditionally marginalized groups? Can technology be racist? How does technology affect our social interactions with each other? What can we learn about the human mind by inventing intelligent machines?
The Ethics of Artificial Intelligence, 2024
Subjectivity
Immersed in the networks of artificial intelligences that are constantly learning from each other, the subject today is being configured by the automated architecture of a computational sovereignty (Bratton 2015). All levels of decision-making are harnessed in given sets of probabilities where the individuality of the subject is broken into endlessly divisible digits. These are specifically reassembled at checkpoints (Deleuze in Negotiations: 1972-1990, Columbia University Press, New York, 1995), in ever-growing actions of predictive data (Cheney-Lippold in We Are Data: Algorithms and the Making of Our Digital Selves, NYU Press, New York, 2017), where consciousness is replaced by mindless computations (Daston in "The rule of rules", lecture Wissenschaftskolleg Berlin, November 21st, 2010). As a result of the automation of cognition, the subject has thus become ultimately deprived of the transcendental tool of reason. This article discusses the consequences of this crisis of conscious cognition at the hands of machines by asking whether the servomechanic model of technology can be overturned to expose the alien subject of artificial intelligence as a mode of thinking originating at, but also beyond, the transcendental schema of the self-determining subject. As much as the socio-affective qualities of the user have become the primary sources of capital abstraction, value, quantification and governmental control, so has technology, as the means of abstraction, itself changed nature. This article will suggest that the cybernetic network of communication has not only absorbed physical and cognitive labour into its circuits of reproduction, but is, more importantly, learning from human culture, through the data analysis of behaviours, the contextual use of content and the sourcing of knowledge. The theorisation of machine learning as involving a process of thinking will be taken here as a fundamental inspiration to argue that the expansion of an
AI for Everyone? Critical Perspectives
This chapter looks at artificial intelligence, its history, and its evolutionary stages. It then discusses the challenges that might arise in a future in which humans must learn to live among machines and robots, analyzing challenges concerning algorithms and organisations, challenges with respect to (un)employment, and the ways democracy and freedom may be jeopardised by the progress of AI.
2021
The problem of “artificial intelligence” is becoming ever more relevant today, and the topic is of great interest to philosophers. This article offers primarily a retrospective analysis of the study of the possibilities of artificially created mechanisms, which first performed primitive actions and then more complex ones, including thought processes. The article then provides a philosophical analysis of the concept of “artificial intelligence”, its capabilities, and its potential dangers.
The externalization of man's rational capacities to robots and computers, which lends machines a semblance of man through the process of programming and simulation, overcomes human frailties. Over the last two centuries this has become one of the profoundest achievements of Cybernetics and Artificial Intelligence (AI). The celebrated efficiency of these techno-scientific products in outdoing human persons in tasks previously reserved for man, such as translation, warfare, and industry, raises the level of unemployment and poses an epistemological challenge to man's intelligence. Not only does artificial intelligence threaten epistemological enquiry as presently constituted; these machines are also incapable of any moral responsibility for their actions. Having realized that Artificial Intelligence, if left unchecked, constitutes a threat to human dignity and personhood and could even terminate the very humanity it seeks to assist, myriad philosophers now raise questions such as: Are intelligent agents capable of 'man-type' self-reflective consciousness and rationality? Can AI truly enjoy the same status as man? Can moral responsibility be ascribed to them? How should humanity treat, and safeguard itself against, these new automated members of our community? These humanistic concerns inform the present research, which primarily highlights the moral and epistemological implications of AI for humanity. We argue for a redirection of AI research and suggest a humanization of Artificial Intelligence that cloaks techno-scientific innovations with humanistic life jackets for man's preservation. The textual analysis method is adopted for this research.
Pakistan Journal of Education, 2023
Artificial intelligence (AI) generally refers to the science of creating machine-based algorithmic models that carry out tasks inspired by human intelligence, such as speech and image recognition, learning, analyzing, decision making, problem solving, and planning. It has a profound impact on how we evaluate the world, technology, morality, and ethics, and on how we perceive a human being, including its psychology, physiology, and behaviors. Hence, AI is an interdisciplinary field that requires the expertise of neuroscientists, computer scientists, philosophers, jurists, and others. In this sense, instead of delving into deep technical explanations and terms, in this paper we aim to take a glance at how AI has been defined and how it has evolved from Greek myths into a cutting-edge technology that affects various aspects of our lives, from healthcare and education to manufacturing and transportation. We also discuss how AI interacts with philosophy, providing examples and counterexamples to theories and arguments bearing on the question of whether AI systems are capable of truly human-like intelligence or even of surpassing human intelligence. In the last part of the article, we emphasize the critical importance of identifying potential ethical concerns posed by AI implementations and the reasons why they should be taken cautiously into account.
Philosophy Papers (PhilPapers), 2024
This paper examines the ontological and epistemological implications of artificial intelligence (AI) through posthumanist philosophy, integrating the works of Deleuze, Foucault, and Haraway with contemporary computational methodologies. It introduces concepts such as negative augmentation, praxes of revealing, and desedimentation, while extending ideas like affirmative cartographies, ethics of alterity, and planes of immanence to critique anthropocentric assumptions about identity, cognition, and agency. By redefining AI systems as dynamic assemblages emerging through networks of interaction and co-creation, the paper challenges traditional dichotomies such as human versus machine and subject versus object. Bridging analytic and continental philosophical traditions, the analysis unites formal tools like attribution analysis and causal reasoning with the interpretive and processual methodologies of continental thought. This synthesis deepens the understanding of AI's epistemic and ethical dimensions, expanding philosophical inquiry while critiquing anthropocentrism in AI design. The paper interrogates the spatial foundations of AI, contrasting Euclidean and non-Euclidean frameworks to examine how optimization processes and adversarial generative models shape computational epistemologies. Critiquing the reliance on Euclidean spatial assumptions, it positions alternative geometries as tools for modeling complex, recursive relationships. Furthermore, the paper addresses the political dimensions of AI, emphasizing its entanglements with ecological, technological, and sociopolitical systems that perpetuate inequality. Through a politics of affirmation and intersectional approaches, it advocates for inclusive frameworks that prioritize marginalized perspectives. The concept of computational qualia is also explored, highlighting how subjective-like dynamics emerge within AI systems and their implications for ethics, transparency, and machine perception. 
Finally, the paper calls for a posthumanist framework in AI ethics and safety, emphasizing interconnectivity, plurality, and the transformative capacities of machine intelligence. This approach advances epistemic pluralism and reimagines the boundaries of intelligence in the digital age, fostering novel ontological possibilities through the co-creation of dynamic systems.
The meaning of AI has undergone drastic changes during the last 60 years of AI discourse(s). What we talk about when saying “AI” is not what it meant in 1958, when John McCarthy, Marvin Minsky and their colleagues started using the term. Take game design as an example: when the Unreal game engine introduced “AI” in 1999, it was mainly talking about pathfinding. For Epic MegaGames, the producers of Unreal, an AI was just a bot or monster whose pathfinding capabilities had been programmed in a few lines of code to escape an enemy. This is not “intelligence” in the Minskyan understanding of the word (and even less what Alan Turing had in mind when he designed the Turing test). There are also attempts to differentiate between AI, classical AI and “Computational Intelligence” (Al-Jobouri 2017). The latter, labelled CI, is used to describe processes such as player affective modelling, co-evolution, automatically generated procedural environments, etc. Artificial intelligence research has commonly been conceptualised as an attempt to reduce the complexity of human thinking (cf. Varela 1988: 359-75). The idea was to map the human brain onto a machine for symbol manipulation – the computer (Minsky 1952; Simon 1996; Hayles 1999). Already in the early days of what we now call “AI research”, McCulloch and Pitts proposed in 1943 that the networking of neurons could be used for pattern recognition purposes (McCulloch/Pitts 1943). Trying to implement cerebral processes on digital computers was the method of choice for the pioneers of artificial intelligence research.
The “New AI” is no longer concerned with observing the congruencies or limitations of compatibility with the biological nature of human intelligence: “Old AI crucially depended on the functionalist assumption that intelligent systems, brains or computers, carry out some Turing-equivalent serial symbol processing, and that the symbols processed are a representation of the field of action of that system.” (Pickering 1993, 126) The ecological approach of the New AI has its greatest impact by showing how it is possible “to learn to recognize objects and events without having any formal representation of them stored within the system.” (ibid, 127) The New Artificial Intelligence movement has abandoned the cognitivist perspective and instead relies on the premise that intelligent behaviour should be analysed using synthetically produced equipment and control architectures (cf. Munakata 2008). Kate Crawford (Microsoft Research) has recently warned against the impact that current AI research might have, in a noteworthy lecture titled AI and the Rise of Fascism. Crawford analysed the risks and potential of AI research and called for a critical approach to new forms of data-driven governmentality: “Just as we are reaching a crucial inflection point in the deployment of AI into everyday life, we are seeing the rise of white nationalism and right-wing authoritarianism in Europe, the US and beyond. How do we protect our communities – and particularly already vulnerable and marginalized groups – from the potential uses of these systems for surveillance, harassment, detainment or deportation?” (Crawford 2017) Following Crawford’s critical assessment, this issue of the Digital Culture & Society journal deals with the impact of AI in knowledge areas such as computational technology, social sciences, philosophy, game studies and the humanities in general.
Subdisciplines of traditional computer science, in particular Artificial Intelligence, Neuroinformatics, Evolutionary Computation, Robotics and Computer Vision, are once more gaining attention. Biological information processing is firmly embedded in commercial applications such as the intelligent personal assistant Google Assistant, Facebook’s facial recognition algorithm DeepFace, Amazon’s device Alexa, or Apple’s software feature Siri (a speech interpretation and recognition interface), to mention just a few. In 2016 Google, Facebook, Amazon, IBM and Microsoft founded what they call a Partnership on AI. (Hern 2016) This indicates a move from academic research institutions to company research clusters. In this context we are interested in receiving contributions on the history of institutional and private research in AI. We would like to invite articles that observe the history of the notion of “artificial intelligence” and articles that point out how specific academic and commercial fields (e.g. game design, aviation industry, transport industry etc.) interpret and use the notion of AI. Against this background, the special issue Rethinking AI will explore and reflect on the hype of neuroinformatics in AI discourses and the potential and limits of critique in the age of computational intelligence. (Johnston 2008; Hayles 2014, 199-210) We are inviting contributions that deal with the history, theory and aesthetics of contemporary neuroscience and the recent trends of artificial intelligence. (cf. Halpern 2014, 62ff) Digital societies increasingly depend on smart learning environments that are technologically inscribed. We ask after the role and value of open processes in learning environments, and we welcome contributions that acknowledge the regime of production as promoted by recent developments in AI.
We particularly welcome contributions that are historical and comparative, or critically reflective about the biological impact on social processes, individual behaviour and technical infrastructure in a post-digital and post-human environment. What are the social, cultural and ethical issues when artificial neuronal networks take hold in digital cultures? What is the impact on digital culture and society when multi-agent systems are equipped with a license to act?