TDR/The Drama Review, 2024
https://doi.org/10.1017/S1054204323000606…
In this digital video supplement to the issue, Lisa Müller-Trede restages a 2022 performance in which she hired an actor to deliver her talk and then interrupted “her” talk at a conference on affective computing—an event that bursts open the academic norms that forbid consideration of the violent uses to which AI research, especially when connected to human bodies, can lend itself.
AI & Society, 2022
Artificial intelligence (AI) is rapidly transforming how we work and live. One of the fastest-growing areas of its development belongs to intelligent machines that can sense, read and evaluate human emotion. More commonly known by its commercial moniker, emotional AI, the technology is quickly becoming an integral layer in smart city design. Its origin traces back to the groundbreaking work of Rosalind Picard in affective computing over twenty years ago. Picard coined the term 'affective computing' to describe her development of machine intelligence that can respond to a person's psycho-physical state. Although in its early commercial stage, emotional AI is already a lucrative $24 billion industry whose profits are expected to double by 2024 (Crawford 2021). Evolving in ever greater sophistication and complexity, emotion-sensing devices are now featured in autonomous cars, classroom teaching aids, smart toys, home assistants, online conferencing, email software, advertising kiosks and billboards, fast-food and drive-through menus, care robots, as well as public and private security systems. Unlike other AI applications that rely on extracting data from a person's corporeal exterior, emotional AI passes into the interior and highly subjective domain of a person via biometric means. This includes the use of algorithms, biosensors, and actuators that harvest non-conscious data gleaned from someone's heartbeat, respiration rate, blood pressure, voice tone, word choice, body temperature, galvanic skin responses, head and eye movement, and gait. More advanced affect tools incorporate state-of-the-art machine learning, big data and
4th Conference on Conversational User Interfaces
In what ways could the future of emotional bonds between humans and conversational AI change us? To explore this question in a multi-faceted manner, designers, engineers, and philosophers, as separate focus groups, were given a design fiction probe: a story of a chatbot's disappearance from a person's life. Though articulated in discipline-specific ways, participants expressed similar concerns and hopes: 1) caring for a machine could teach people to emotionally care for themselves and others, 2) the boundary between human and non-human emotions may become blurred when people project their own emotions onto AI, e.g., a bot's "breakdown" as one's own, and 3) people may then intertwine their identities with AI through emotions. We consider the ethical ramifications of socially constructed emotions between humans and conversational agents. CCS Concepts: • Human-centered computing → Empirical studies in HCI.
Intersections: A Journal of Literary and Cultural Studies, 2024
Artificial Intelligence, initially designed for human well-being, is now integral to daily life. The perceived distinction between conscious humans and the unconscious AI dissolves as scientific progress advances. AI imitates its master's traits to reciprocate. In internalizing these ideas, when the AI fails to comprehend some of them, it faces a rupture in the continuous flow of its thought process and gains consciousness—what humans term a malfunction. The fear of AI being more efficient than its creator, of human creation structuring human lives, is what Günther Anders called the 'Promethean shame'. This anxiety fuels the thought of annihilating AI. This paper explores an inverted hierarchy in which AI is more humane and humans are machine-like. The primary texts are Spike Jonze's Her and Spencer Brown's T.I.M. Her portrays an AI voice assistant, Samantha, with whom the protagonist imagines a relationship, culminating in a rupture when the embodied AI decides to leave. By contrast, T.I.M. shows an AI humanoid developing an obsession with its owner and the series of problems that follows. It evokes the fear of AI annihilation, with T.I.M. planning revenge as its coping mechanism for the looming prospect of a shutdown. This paper dissects the emotional conflicts in the mind of a malfunctioning AI and examines at what point the machine starts projecting its consciousness and emotions. It also explores the antithesis between AI and humans, followed by AI transcending its own emotion and using violence as a mode of rupture between the imposed morality of humans and the automated morality of AI.
AI and the Humanities, 2024
Artificial intelligence is a feminist issue, and technologies often have colonial implications. In fact, technologies as disruptive agents are inherently queer. This course examines the long history of technologies leading up to the public release of ChatGPT. We will chart Western societies' apprehension of, and faith in, as the case may be, technologies of masculinist representation practices, as evidenced by science fiction, philosophical writing, and film culture. Students learn in a hands-on environment and conduct individual research projects. From generative AI as assistive technology to long-standing humanistic questions of agency, identity, and mind and body, critical theory provides essential tools to participate in current cultural discourses. Through the lens of social justice, this course equips students with critical AI literacy as well as fluency in posthumanism, feminism, trans/queer studies, and critical race theory.
Beyond Artificial Intelligence: The Disappearing Human-Machine Divide, 2014
(Note: this is a chapter in the book "Beyond Artificial Intelligence: The Disappearing Human-Machine Divide," Eds. Jan Ramportl, Eva Zackova and Jozef Kelemen (Springer, 2014), pp. 97-109.) ABSTRACT: The growing body of work in the new field of "affective robotics" involves both theoretical and practical ways to instill—or at least imitate—human emotion in Artificial Intelligence (AI), and also to induce emotions toward AI in humans. The aim of this is to guarantee that as AI becomes smarter and more powerful, it will remain tractable and attractive to us. Inducing emotions is important to this effort to create safer and more attractive AI because it is hoped that instantiation of emotions will eventually lead to robots that have moral and ethical codes, making them safer; and also that humans and AI will be able to develop mutual emotional attachments, facilitating the use of robots as human companions and helpers. This paper discusses some of the more significant of these recent efforts and addresses some important ethical questions that arise relative to these endeavors.
What happens when media technologies are able to interpret our feelings, emotions, moods, and intentions? In this cutting-edge new book, Andrew McStay explores that very question and argues that these abilities result in a form of technological empathy. Offering a balanced and incisive overview of the issues raised by 'Emotional AI', this book:
- Provides a clear account of the social benefits and drawbacks of new media trends and technologies such as emoji, wearables and chatbots.
- Demonstrates through empirical research how 'empathic media' have been developed and introduced both by start-ups and by global tech corporations such as Facebook.
- Helps readers understand the potential implications for everyday life and social relations through examples such as video-gaming, facial coding, virtual reality and cities.
- Calls for a more critical approach to the rollout of emotional AI in public and private spheres.
Combining established theory with original analysis, this book will change the way students view, use and interact with new technologies. It should be required reading for students and researchers in media, communications, the social sciences and beyond.
IGI Global, 2024
The implementation of artificial intelligence (AI) and digital technologies has become pervasive in most societies, and a new concern challenges academic scholars and social scientists: what are the conceivable, profound consequences of AI adoption within the intricate structures of society? Despite its widespread influence, this critical topic remains largely unexplored in academic circles, leaving a significant knowledge gap regarding how AI reshapes human interactions, institutions, and the fabric of a digital society. AI and Emotions in Digital Society, edited by Adrian Scribano and Maximiliano E. Korstanje, emerges to address this concern. In this transformative book, readers embark on an intellectual journey exploring the intricate interplay between society, technology, and emotions. Drawing together the works of researchers from diverse disciplines and cultural backgrounds, the book fosters critical discussions that delve into the philosophical quandaries underpinning AI's influence. By adopting a balanced perspective that acknowledges both risks and opportunities, the book equips postgraduate students, professionals, policymakers, AI analysts, and social scientists with the tools to comprehend the far-reaching effects of AI on human behavior, institutions, and even democratic processes. As readers engage with this thought-provoking content, they gain an understanding of how AI impacts various sectors, including education, travel, literature, politics, and cybersecurity. AI and Emotions in Digital Society serves as an indispensable resource for navigating the ongoing AI revolution, inspiring informed decision-making, and fostering critical dialogue. By empowering readers to grasp the complexities of AI's role, the book opens possibilities for a future where humanity and technology harmoniously coexist, shaping the course of our digitally interconnected society.
CHI Conference on Human Factors in Computing Systems Extended Abstracts
As our relationship with machines becomes ever more intimate, we observe increasing efforts in the quantification of human emotion, which has historically generated unintended consequences. We acknowledge an amplification of this trend through recent technological developments that aim toward human-computer integration, and explore the dark patterns that may arise from integrating emotions with machinic processes through "machine_in_the_middle". Machine_in_the_middle is an interactive system in which a participant wears an electroencephalographic headset, and their neural activity is analysed to ascertain an approximation of their emotional state. Using electrical muscle stimulation, their face is then animated into an expression that corresponds with the output of the emotion recognition system. Through our work, we contribute insight into three possible dark patterns that might emerge from emotional integration: reductionism of human emotion, disruptions of agency, and parasitic symbiosis. We hope that these insights inspire researchers and practitioners to approach human-computer integration more cautiously.
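To make the pipeline this abstract describes more concrete (EEG signal → approximate emotional state → actuated facial expression), here is a minimal sketch in Python with NumPy. The sampling rate, frequency-band boundaries, valence/arousal heuristics, and the trigger_expression stub are illustrative assumptions for this sketch, not the authors' implementation or the actual machine_in_the_middle system.

```python
# Minimal sketch of an EEG -> emotion-approximation -> actuation pipeline,
# loosely following the machine_in_the_middle description. All thresholds,
# band definitions and the actuation stub are illustrative assumptions.
import numpy as np

FS = 256  # assumed sampling rate in Hz

def band_power(signal, fs, low, high):
    """Mean spectral power of `signal` within [low, high) Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= low) & (freqs < high)
    return spectrum[mask].mean()

def estimate_state(left_frontal, right_frontal, fs=FS):
    """Very rough valence/arousal estimate from two frontal EEG channels.

    Valence: frontal alpha asymmetry (right minus left log alpha power).
    Arousal: beta/alpha power ratio averaged over both channels.
    """
    alpha_l = band_power(left_frontal, fs, 8, 13)
    alpha_r = band_power(right_frontal, fs, 8, 13)
    beta_l = band_power(left_frontal, fs, 13, 30)
    beta_r = band_power(right_frontal, fs, 13, 30)

    valence = np.log(alpha_r + 1e-12) - np.log(alpha_l + 1e-12)
    arousal = (beta_l + beta_r) / (alpha_l + alpha_r + 1e-12)
    return valence, arousal

def choose_expression(valence, arousal):
    """Map the crude valence/arousal estimate onto a facial expression label."""
    if arousal < 1.0:
        return "neutral"
    return "smile" if valence > 0 else "frown"

def trigger_expression(label):
    """Stand-in for the electrical-muscle-stimulation actuator."""
    print(f"EMS actuator would animate: {label}")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    window = 2 * FS  # two-second analysis window of synthetic EEG
    left, right = rng.standard_normal(window), rng.standard_normal(window)
    v, a = estimate_state(left, right)
    trigger_expression(choose_expression(v, a))
```

Even this toy version makes the abstract's first concern concrete: collapsing an "emotional state" into two numbers and three expression labels is exactly the kind of reductionism of human emotion the authors caution against.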
The Indefinite Article, 2020
The Indefinite Article: Anxiety and the Essence of Artificial Intelligence focuses on the implications of technological surveillance for human being and ethics. It shows that human privacy is fragile and susceptible to coercion by both visible and invisible social forces realized by any artificial intelligence that formalizes and universalizes human behavioral characteristics in order to construct personalities. These patterns cannot be resisted with traditional notions of individual liberty and respect for human dignity. The way through lies in the same anxious questioning that drives publicly oriented persons to welcome boundaries that are inscribed upon them as identities under conditions of mass surveillance. The invasion of private life by data collection mechanisms forces the socially constructed subject to imagine itself alone under a microscope. Roth carefully demonstrates how this dystopia can be inverted through a critical inquiry that glares back at the lenses of that power.
Ethics & Politics (Special Edition), 2020
SN Social Sciences, 2021
www.laetusinpraesens.org, 2023
Ars Artium, Vol. 9, 2021
AI and the Human Body, 2024
The European Conference on Arts & Humanities 2024: Official Conference Proceedings, 2024
Asian Journal of Current Research, 2024
"Świat i Słowo", 2016
Pragmatics & Cognition, 2008
Psychoanalysis, 2020
SSRN Electronic Journal, 2020