2016, Procedia Computer Science
…
We present a socio-human cognitive framework that radically deemphasizes the role of individual human agents in both the formation of social systems and their ongoing operation thereafter. Our point of departure is Simondon’s (1992) theory of individuation, which we integrate with the enactive theory of cognition (Di Paolo et al., 2010) and Luhmann’s (1996) theory of social systems. This forges a novel view of social systems as complex, individuating sequences of communicative interactions that together constitute distributed yet distinct cognitive agencies, acquiring a capacity to exert influence over their human-constituted environment. We conclude that the resulting framework suggests several different paths for integrating AI agents into human society. One path suggests emulating a greatly simplified version of the human mind, reduced in its functions to the specific triple selection-making necessary for sustaining social systems. Another conceives AI systems that follow the distributed, autonomous architecture of social systems rather than that of humans.
2021
We argue the case that human social systems are distinct cognitive agents operating in their own self-constructed environments. Our point of departure is Luhmann's (1996) theory of social systems as self-organizing relationships between communications. Applying the enactive theory of cognition (Di Paolo et al., 2010) and Simondon's (1992) theory of individuation to the Luhmannian model of social systems results in a view of social systems as complex, individuating sequences of communicative interactions that together constitute distributed yet autonomous cognitive agencies. Our argument is based on a broader understanding of cognition as sense-making, which precedes the existence of a consolidated cognitive agent to whom the activity of sense-making can be attributed. Instead, we see cognitive activity as a process by which the actual agents are formed. This brings us to conclude that though there is 'nobody there' in the essentialist sense, human social systems constitute distributed yet distinct and integrated loci of autonomous cognitive activity.
Social machines are systems formed by technical and human elements interacting in a structured manner. The use of digital platforms as mediators allows large numbers of human participants to join such mechanisms, creating systems where interconnected digital and human components operate as a single machine capable of highly sophisticated behaviour. Under certain conditions, such systems can be described as autonomous and goal-driven agents. Many examples of modern Artificial Intelligence (AI) can be regarded as instances of this class of mechanisms. We argue that this type of autonomous social machine has provided a new paradigm for the design of intelligent systems, marking a new phase in the field of AI. The consequences of this observation range from the methodological and philosophical to the ethical. On the one hand, it emphasises the role of Human-Computer Interaction in the design of intelligent systems, while on the other it draws attention to both the risks for a human being and ...
Next society might be that in which computer-based artificial intelligences begin to take part in communication. We therefore need to rethink one of modern society’s most cherished ideas: that only humans qualify for communication. We have driven spirits, gods, devils, animals, and plants out of this realm. This paper looks into notions of society, communication, and the social propounded by the theory of social systems to investigate how and when artificial intelligences will be able to join human beings in that most demanding undertaking, communication. Independence, self-reference, and complexity are identified as some of the conditions artificial intelligences will have to fulfill. It will take new structures and a new culture for society to live up to this.
Integrative Psychological and Behavioral Science, 2020
Can artificial intelligence (AI) develop the potential to be our partner, and will we be as sensitive to its social signals as we are to those of human beings? I examine both of these questions and how cultural psychology might add such questions to its research agenda. There are three areas in which I believe there is a need for both better understanding and added perspective. First, I present some important concepts and ideas from the world of AI that might be beneficial for pursuing research topics focused on AI within the cultural psychology research agenda. Second, there are some very interesting questions that must be answered with respect to central notions in cultural psychology as these are tested through human interactions with AI. Third, I claim that social robots are parasitic on deeply ingrained human social behaviour, in the sense that they exploit and feed upon processes and mechanisms that evolved for purposes originally completely alien to human-computer interactions.
2016
We argue the case that human social systems, and social organizations in particular, are concrete, non-metaphorical cognitive agents operating in their own self-constructed environments. Our point of departure is Luhmann’s (1996) theory of social systems as self-organizing systems of communications. Integrating the Luhmannian theory with the enactive theory of cognition (Di Paolo et al., 2010) and Simondon’s (1992) theory of individuation results in a novel view of social systems as complex, individuating sequences of communicative interactions that together constitute distributed yet distinct cognitive agencies. The relations of such agencies with their respective environments (involving other agencies of the same construction) are further clarified by discussing both the Hayek-Hebb (Hebb, 1949; Hayek, 1952; Edelman, 1987) and the perturbation-compensation (Maturana & Varela, 1980) perspectives on systems adaptiveness, as each reveals different and complementary facets of the operat...
The Matrix of AI Agency, 2024
Recent developments in artificial intelligence (AI) make the question of machine agency a pressing matter. Contrary to the idea that agency is the inherent quality of a system, we argue that agency should be seen as a social status, or more precisely, a socially granted license to issue actions that is acquired and monitored in social practices. From this perspective, we develop criteria for the theoretical demarcation of agents and nonagents to distinguish entities based on their attributed abilities and their relative power in social networks. We derive a matrix of different types of agents and show how this matrix can inform empirical studies on AI.
Monitoring of Public Opinion: Economic and Social Changes
Introduction to the Special Issue of the Journal "Artificial Intelligence and Artificial Sociality: New Phenomena and Challenges for the Social Sciences"
Journal of Artificial General Intelligence, vol. 3, 2012, DOI: 10.2478/v10229-011-0015-3
Instead of connecting “autonomy” with a requirement of total self-sufficiency and the capability to operate in isolation, it is much more reasonable to center our efforts on positioning the intelligent entities created through cognitive architectures appropriately within human-machine social networks. In this way, our intelligent entities can externally offload their physical and informational functions when needed, and harmoniously and empathetically integrate into ecosystems containing many more intelligent entities. Thus, they can participate in, and actively help organize, much wider entities beyond themselves. As a result, they can substantially increase their individual effective intelligence, i.e. their capacity to reach self-selected goals, while also creating a collective intelligence that far surpasses the limits of any individual within the wider entities in which they participate.
L'Harmattan, 2020
Thanks to the development of NBIC technologies, man has exteriorised his intelligence and developed what has come to be known as Artificial Intelligence (AI). This has contributed to enhancing or ameliorating the living conditions of humanity in one way or another. Yet, the more man uses the new companions given to him by the high-tech industry, the more he tends to forget his fellow man. His existence is thus more and more mechanised, so much so that AI has implications for social life. Examining these implications is the aim of this paper.