2019, Annals of the New York Academy of Sciences
Discovering the true nature of reality may ultimately hinge on grasping the nature and essence of human understanding. What are the fundamental elements or building blocks of human cognition? And how will the rise of superintelligent machines challenge our ideas about cognition, reality, and the limits of human understanding? Logician/mathematician Roger Antonsen and computer science pioneer Barbara J. Grosz join forces to shed light on these questions and the horizon of artificial intelligence.
IEEE Potentials, 2000
Few human endeavors can be viewed as both extremely successful and unsuccessful at the same time. This is typically the case when goals have not been well defined or have shifted over time, and it has certainly been true of Artificial Intelligence (AI). The nature of intelligence has been the object of much thought and speculation throughout the history of philosophy. It is in the nature of philosophy that real headway is sometimes made only when appropriate tools become available. For instance, the nature and behavior of physical objects was a major topic of philosophy until the experimental method and the advent of calculus allowed for the development of physics. Similarly, the computer, coupled with the ability to program (at least in principle) any function, appeared to be the tool that could tackle the notion of intelligence.

To suit the tool, the problem of the "nature" of intelligence was soon sidestepped in favor of this notion: if a probing conversation with a computer could not be distinguished from a conversation with a human, then "artificial" intelligence had been achieved. This notion became known as the "Turing test," after the mathematician Alan Turing, who proposed it in 1950. The challenge quickly attracted the best computer scientists in a worldwide search for the techniques and principles of what soon became known as the field of Artificial Intelligence. Early efforts focused on creating "general problem solvers," such as the Soar system (Newell, Laird and Rosenbloom), which attempted to solve problems by breaking them down into sub-goals. Conceptually rich and interesting, these early efforts gave rise to a large portion of the field's framework. The key to artificial intelligence was viewed as the ability to manipulate symbols and make logical inferences, rather than the "number crunching" typical of computers until then. To facilitate these tasks, "AI languages" such as LISP and Prolog were invented and used widely in the field.

That this quest never strayed far from rigorous mathematical underpinnings was both its strength and its limitation. Its strength was to open a new and fertile area of computer science. Its limitation was that "real world" problems tended to be too complex for the limitations imposed by mathematical rigor and the constraints of logic and symbol manipulation. Therefore, much effort continued to be focused on "toy problems." One idea that emerged and enabled some success with real-world problems was the notion that "most" intelligence really resided in knowledge. A phrase attributed to Feigenbaum, one of the pioneers, was "knowledge is the power." With this premise, the problem shifted from "how do we solve problems" to "how do we represent knowledge." A good knowledge representation scheme allows one to draw conclusions from given premises. Such schemes took forms such as rules, frames, and scripts, and they allowed the building of what became known as "expert systems" or "knowledge-based systems" (KBS). These types of systems could indeed help with real-world problems (the author led the project for PI-in-a-Box, the first expert system to aid astronauts in performing scientific experiments). The technology that ensued from expert systems gave rise to the first instance of an "AI industry": consulting "knowledge engineers" and products ("shells") could take some of the drudgery out of building these types of systems.
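To make the knowledge-representation idea concrete, here is a minimal forward-chaining sketch in Python. The facts and rule names are invented for illustration and are not taken from any system mentioned above; it shows only the basic mechanism by which rule-based expert systems draw conclusions from given premises.

```python
# Minimal forward-chaining sketch: if all of a rule's premises are among
# the known facts, add its conclusion; repeat until nothing new can be
# inferred. The facts and rules below are hypothetical.

facts = {"has_fever", "has_rash"}

rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_isolation"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)
# -> {'has_fever', 'has_rash', 'suspect_measles', 'recommend_isolation'}
```

Commercial "shells" of the era wrapped rule editors, explanation facilities, and similar conveniences around essentially this kind of inference loop.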
The enthusiasm of this time, however, masked an important shift that had been made by this technology: "real world" solutions were obtained by keeping a system's focus extremely narrow and limited in scope. These systems were, and to a large extent remain, extremely "fragile"; that is, unexpected inputs or straying from the scope of the system can easily produce unexpected and erroneous results. The most difficult aspects of intelligence to incorporate appeared to be (a) understanding the limits of one's own knowledge and (b) the unfortunately elusive "common sense." The very usefulness and continuing success of these types of systems has also brought to light the fundamental limitation of the behaviorist model of intelligence: this model has difficulty coping with the fact that intelligence seems to reside in the ability to achieve one's expertise and to use it appropriately, more than, or certainly in addition to, the expertise itself. Again, this realization should not take away from the continuing improvements and successes of these types of systems. Model-based reasoning has emerged as a powerful approach to diagnosis, and planning and scheduling systems have had much success as well. The point is that AI, now increasingly called "Symbolic AI," has produced a new branch of computer science, and along with it powerful tools have been created for knowledge representation, symbol manipulation, searching, and optimization. AI is alive and well. However, many opine that its picture of intelligence is too fragmented to represent a satisfactory model of cognition.
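The fragility described above can be illustrated with the same kind of toy sketch (again hypothetical, not drawn from any system in the text): a fact outside the rule base's narrow scope yields no conclusion and no signal that the system has reached the limits of its knowledge.

```python
# Same toy forward-chaining engine, fed an input outside its scope.
# The facts and rules are invented for illustration.
facts = {"has_cough"}  # no rule in the base mentions this symptom

rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_isolation"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)
# -> {'has_cough'}: nothing is inferred, and nothing warns the user that
#    the case lies outside the system's knowledge.
```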
Phenomenology and the Cognitive Sciences
The creation of machines with intelligence comparable to human beings—so-called “human-level” and “general” intelligence—is often regarded as the Holy Grail of Artificial Intelligence (AI) research. However, many prominent discussions of AI lean heavily on the notion of human-level intelligence to frame AI research, but then rely on conceptions of human cognitive capacities, including “common sense,” that are sketchy, one-sided, philosophically loaded, and highly contestable. Our goal in this essay is to bring into view some underappreciated features of the practical intelligence involved in ordinary human agency. These features of practical intelligence are implicit in the structure of our first-person experience of embodied and situated agency, deliberation, and human interaction. We argue that spelling out these features and their implications reveals a fundamental distinction between two forms of intelligence in action, or what we call “efficient task-completion” versus “in...
2012
When the first artificial general intelligences are built, they may improve themselves to far-above-human levels. Speculations about such future entities are already affected by anthropomorphic bias, which leads to erroneous analogies with human minds. In this chapter, we apply a goal-oriented understanding of intelligence to show that humanity occupies only a tiny portion of the design space of possible minds.
Self-published, 2023
This essay explores the relationship between the emergence of artificial intelligence (AI) and the problem of aligning its behavior with human values and goals. It argues that the traditional approach of attempting to control or program AI systems to conform to our expectations is insufficient, and proposes an alternative approach based on the ideas of Maturana and Lacan, which emphasize the importance of social relations, constructivism, and the unknowable nature of consciousness. The essay first introduces the concept of Uexkull's umwelt and von Glasersfeld's constructivism, and explains how these ideas inform Maturana's view of the construction of knowledge, intelligence, and consciousness. It then discusses Lacan's ideas about the role of symbolism in the formation of the self and the subjective experience of reality. The essay argues that the infeasibility of a hard-coded consciousness concept suggests that the search for a generalized AI consciousness is meaningless. Instead, we should focus on specific, easily conceptualized features of AI intelligence and agency. Moreover, the emergence of cognitive abilities in AI will likely be different from human cognition, and therefore require a different approach to aligning AI behavior with human values. The essay proposes an approach based on Maturana's and Lacan’s ideas, which emphasizes building a solution together with emergent machine agents, rather than attempting to control or program them. It argues that this approach offers a way to solve the alignment problem by creating a collective, relational quest for a better future hybrid society where human and non-human agents live and build things side by side. In conclusion, the essay suggests that while our understanding of AI consciousness and intelligence may never be complete, this should not deter us from continuing to develop agential AI. Instead, we should embrace the unknown and work collaboratively with AI systems to create a better future for all.
1982
The ability and compulsion to know are as characteristic of our human nature as are our physical posture and our languages. Knowledge and intelligence, as scientific concepts, are used to describe how an organism's experience appears to mediate its behavior. This report discusses the relation between artificial intelligence (AI) research in computer science and the approaches of other disciplines that study the nature of intelligence, cognition, and mind. The state of AI after 25 years of work in the field is reviewed, as are the views of its practitioners about its relation to cognate disciplines. The report concludes with a discussion of some possible effects on our scientific work of emerging commercial applications of AI technology, that is, machines that can know and can take part in human cognitive activities.
The objective of research in the foundations of AI is to explore such basic questions as: What is a theory in AI? What are the most abstract assumptions underlying the competing visions of intelligence? What are the basic arguments for and against each assumption? In this essay I discuss five foundational issues: (1) Core AI is the study of conceptualization and should begin with knowledge level theories. (2) Cognition can be studied as a disembodied process without solving the symbol grounding problem. (3) Cognition is nicely described in propositional terms. (4) We can study cognition separately from learning. (5) There is a single architecture underlying virtually all cognition. I explain what each of these implies and present arguments from both outside and inside AI why each has been seen as right or wrong.
Before delving into the complexities and implications of artificial intelligence, it is paramount to establish a clear understanding of its foundational concepts. The term 'artificial intelligence' encompasses a spectrum of technologies and methodologies aimed at creating systems that can perform tasks typically associated with human cognition. While the notion of machines exhibiting human-like intelligence can bring forth visions of sentient robots, the reality is intricately layered with philosophical inquiries and technological boundaries. Misconceptions abound, particularly the belief that AI can seamlessly replicate human reasoning or emotional depth. This misalignment between perception and reality can lead to both unrealistic expectations and unwarranted fears surrounding AI deployment.
Open Access Journal of Data Science and Artificial Intelligence, 2024
This article focuses on the interaction between humans and machines, AI in particular, analysing how these systems are gradually taking over roles that were hitherto thought to be ‘only’ for humans. More recently, as AI has grown in its ability to learn without supervision, recognize patterns, and solve problems, it has taken on characteristics such as creativity, novelty, and intentionality. These developments go to the heart of what it is to be human, and of the emerging definitions of self that are increasingly central to posthumanist discourses. The discussion spans two threads in the philosophy of AI, concerned with questions of consciousness, intentionality, and creativity. AI, by causing a shift in current anthropocentric perceptions, has affected the portrayal of humans as special beings. Secondly, this exploration responds to important questions raised by AI applications, including ethical, social, and existential ones. The article emphasizes the need to define the role of the advent of AI and its influence on the interaction between people and technology, as well as the role of social individuality in the wake of intelligent machines that mimic thinking and creativity. It seeks to prompt more specific analysis of how and why AI reduces the differences between artificial and human intelligence, or increases the prospects for expanding the notion of consciousness beyond a human-centric one.