2009, Encyclopedia of Consciousness
Consciousness is only marginally relevant to artificial intelligence (AI), because to most researchers in the field other problems seem more pressing. However, there have been proposals for how consciousness would be accounted for in a complete computational theory of the mind, from theorists such as Dennett, Hofstadter, McCarthy, McDermott, Minsky, Perlis, Sloman, and Smith. One can extract from these speculations a sketch of a theoretical synthesis, according to which consciousness is the property a system has by virtue of modeling itself as having sensations and making free decisions. Critics such as Harnad and Searle have not succeeded in demolishing a priori this or any other computational theory, but no such theory can be verified or refuted until and unless AI is successful in finding computational solutions of difficult problems such as vision, language, and locomotion.
arXiv (Cornell University), 2023
We have defined the Conscious Turing Machine (CTM) for the purpose of investigating a Theoretical Computer Science (TCS) approach to consciousness. For this, we have hewn to the TCS demand for simplicity and understandability. The CTM is consequently and intentionally a simple machine. It is not a model of the brain, though its design has greatly benefited from, and continues to benefit from, neuroscience and psychology. The CTM is a model of and for consciousness. Although it is developed to understand consciousness, the CTM offers a thoughtful and novel guide to the creation of an Artificial General Intelligence (AGI). For example, the CTM has an enormous number of powerful processors, some with specialized expertise, others unspecialized but poised to develop an expertise. For whatever problem must be dealt with, the CTM has an excellent way to utilize those processors that have the required knowledge, ability, and time to work on the problem, even if it is not aware of which ones these may be.
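The processor-competition idea in this abstract can be sketched in a few lines. The following toy sketch is my own construction, not the authors' code; all processor names, problems, and weights are invented. Processors submit weighted "chunks", a binary Up-Tree tournament selects a single winner, and the winner is broadcast back to every processor:

```python
# Toy sketch of a CTM-style competition (illustrative only, not the
# authors' implementation; names and weights are invented).

class Processor:
    def __init__(self, name, expertise):
        self.name = name
        self.expertise = expertise  # hypothetical relevance score per problem
        self.inbox = []             # chunks received via broadcast

    def submit_chunk(self, problem):
        # The weight expresses how relevant this processor believes it is.
        return {"source": self.name, "problem": problem,
                "weight": self.expertise.get(problem, 0.0)}

    def receive(self, chunk):
        self.inbox.append(chunk)


def up_tree_competition(chunks):
    # Pairwise tournament: at each level the higher-weight chunk moves up.
    while len(chunks) > 1:
        chunks = [max(chunks[i:i + 2], key=lambda c: c["weight"])
                  for i in range(0, len(chunks), 2)]
    return chunks[0]


def broadcast(winner, processors):
    # The winning chunk is sent to every processor, specialized or not.
    for p in processors:
        p.receive(winner)


processors = [
    Processor("vision", {"navigate": 0.9}),
    Processor("language", {"navigate": 0.2}),
    Processor("planning", {"navigate": 0.7}),
    Processor("novice", {}),  # unspecialized, weight 0 for this problem
]
chunks = [p.submit_chunk("navigate") for p in processors]
winner = up_tree_competition(chunks)
broadcast(winner, processors)
```

Because the winner is chosen by the competition rather than by addressing a specific processor, the machine need not know in advance which processors have the required expertise, which is the point the abstract makes.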
International Journal of Machine Consciousness, 2009
The question about the potential for consciousness of artificial systems has often been addressed using thought experiments, which are often problematic in the philosophy of mind. A more promising approach is to use real experiments to gather data about the correlates of consciousness in humans, and develop this data into theories that make predictions about human and artificial consciousness. A key issue with an experimental approach is that consciousness can only be measured using behavior, which places fundamental limits on our ability to identify the correlates of consciousness. This paper formalizes these limits as a distinction between type I and type II potential correlates of consciousness (PCCs). Since it is not possible to decide empirically whether type I PCCs are necessary for consciousness, it is indeterminable whether a machine that lacks neurons or hemoglobin, for example, is potentially conscious. A number of responses have been put forward to this problem, including suspension of judgment, liberal and conservative attribution of the potential for consciousness and a psychometric scale that models our judgment about the relationship between type I PCCs and consciousness.
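The type I / type II distinction can be illustrated with a minimal sketch. This is my own illustrative framing with invented data, not code from the paper: a candidate correlate whose presence varies across behaviorally testable cases can be probed experimentally (type II), while one that is present in every testable case, such as neurons or hemoglobin, cannot be shown empirically to be necessary or unnecessary (type I):

```python
# Illustrative sketch of the type I / type II PCC distinction
# (invented observations; "gamma_synchrony" is a hypothetical example).

observations = [
    # (correlate, present in this case?, behavioral report of consciousness?)
    {"correlate": "neurons",         "present": True,  "conscious": True},
    {"correlate": "neurons",         "present": True,  "conscious": False},
    {"correlate": "gamma_synchrony", "present": True,  "conscious": True},
    {"correlate": "gamma_synchrony", "present": False, "conscious": False},
]

def pcc_type(correlate, data):
    rows = [d for d in data if d["correlate"] == correlate]
    present_values = {d["present"] for d in rows}
    # If the correlate is sometimes present and sometimes absent in
    # behaviorally measurable subjects, its link to consciousness can be
    # tested (type II); if it never varies, its necessity is undecidable.
    return "type II" if len(present_values) > 1 else "type I"

print(pcc_type("neurons", observations))          # type I
print(pcc_type("gamma_synchrony", observations))  # type II
```

The sketch makes the paper's point concrete: because every behaviorally testable subject has neurons, no experiment on such subjects can settle whether a neuron-free machine could be conscious.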
scip Labs, 2018
Awake consciousness comprises the complexities of perception, attention and functions of information processing. Artificial intelligence integrates machine consciousness research regarding existence and framework development. Processes, functions and benefits, as well as ethical and legal implications, are at the center of scientific discourse. Demystifying the consciousness of humans and machines can yield great benefits for society, for example in the fields of healthcare or law.
Consciousness, which opens us to a representation of a world otherwise closed on itself, is a fundamental attribute of nature, an essential operator in the genesis of living structures and the cognitive processes associated with them. Consciousness is the key to life. In its absence no life would have appeared on Earth or on any exoplanet. The 'computational theory of the mind', in which the human mind would function as a computing machine, is totally unfounded. A robot built only on the basis of relationships between technical components governed by physical laws cannot be fundamentally autonomous and self-organized in the way human beings are. It is only a more or less efficient automaton operating in an environment that has been specifically defined by its manufacturer, who is naturally endowed with a consciousness that is formally irreducible to any physical interaction.
Philosophies
Consciousness and intelligence are properties that can be misunderstood as necessarily dependent. The term artificial intelligence and the kind of problems it has managed to solve in recent years have been presented as an argument to establish that machines experience some sort of consciousness. Following Russell's analogy, if a machine can do what a conscious human being does, the likelihood that the machine is conscious increases. However, the social implications of this analogy are catastrophic. Concretely, if rights are given to entities that can solve the kind of problems that a neurotypical person can, does the machine have potentially more rights than a person who has a disability? For example, autism spectrum disorder can make a person unable to solve the kind of problems that a machine solves. We believe the obvious answer is no, as problem-solving does not imply consciousness. Consequently, we will argue in this paper how phenomenal consciousness, at least, cannot b...
2007
Abstract This paper embodies the authors' suggestive, hypothetical and sometimes speculative attempts to answer questions related to the interplay between consciousness and AI. We explore the theoretical foundations of consciousness in AI systems and provide examples that demonstrate the potential utility of incorporating functional consciousness in cognitive AI systems.
International Journal of Philosophy, 2020
The study of consciousness has become the "most precious trophy" of neuroscience, artificial intelligence (AI) and psychology alike. Because consciousness is part of the primary dimension of the mind, indeed the only one we can access directly, and because consciousness is what gives us our knowledge of the world and of ourselves, its scientific study will bring us closer to understanding the very nature of what we are as individuals. The study of consciousness engages the thorny issue of whether we are free beings, individuals who exercise free will and are responsible for our actions. Since the concept most widely held today in the sciences that elucidate the functioning of the human mind is that human beings are, at bottom, a complex physiological computational mechanism, it is easy to understand how deciphering the nature of consciousness can constitute a "threat" to the principle of the individual's moral responsibility. The mechanistic theory warns that freedom, understood as the ability to make decisions that are not circumscribed by any type of rule or pre-established process, may be a mere illusion based on a false sense of "control". This may seem to be the case, just as for centuries it seemed that the sun rotated around the earth, as that was the impression conveyed to the senses. Along these lines, when we think consciously, we consider the advantages and disadvantages of different options and make decisions on the basis of an evaluation of the best alternative. However, under the mechanistic premise, all of life's experiences would be reduced to the culmination of unconscious processes that are beyond our control, and the experience of consciousness would exert no causal function over our actions or our internal states of mind. The main aim of this article is to discuss some of the weaknesses of a strictly mechanistic explanation of how the human mind works.
To do this, it presents a critical review of some of the different approaches to the study of consciousness advanced to date and concludes by submitting a proposal of our own.
arXiv (Cornell University), 2022
Consciousness and intelligence are properties commonly understood as dependent by folk psychology and society in general. The term artificial intelligence and the kind of problems it has managed to solve in recent years have been presented as an argument to establish that machines experience some sort of consciousness. Following Russell's analogy, if a machine is able to do what a conscious human being does, the likelihood that the machine is conscious increases. However, the social implications of this analogy are catastrophic. Concretely, if rights are given to entities that can solve the kind of problems that a neurotypical person can, does the machine have potentially more rights than a person who has a disability? For example, autism spectrum disorder can make a person unable to solve the kind of problems that a machine solves. We believe that the obvious answer is no, as problem solving does not imply consciousness. Consequently, we will argue in this paper how phenomenal consciousness and, at least, computational intelligence are independent, and why machines do not possess phenomenal consciousness although they can potentially develop a higher computational intelligence than human beings. In order to do so, we try to formulate an objective measure of computational intelligence and study how it presents in human beings, animals and machines. Analogously, we study phenomenal consciousness as a dichotomous variable and how it is distributed in humans, animals and machines. As phenomenal consciousness and computational intelligence are independent, this fact has critical implications for society, which we also analyze in this work.
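The abstract's two variables, a scalar measure of computational intelligence and a dichotomous consciousness flag, can be mocked up in a short hedged sketch. All names and scores below are invented for illustration; this is not the paper's actual measure:

```python
# Hedged toy model of the paper's framing: a scalar intelligence score
# and a dichotomous (True/False) phenomenal-consciousness variable.
# Agents and scores are invented for illustration only.

agents = [
    {"name": "neurotypical human",    "intelligence": 0.8,  "conscious": True},
    {"name": "person with severe ASD", "intelligence": 0.3,  "conscious": True},
    {"name": "chess engine",          "intelligence": 0.9,  "conscious": False},
    {"name": "thermostat",            "intelligence": 0.05, "conscious": False},
]

# The best problem-solver in this toy population is not conscious...
most_capable = max(agents, key=lambda a: a["intelligence"])

# ...and conscious agents span low and high scores alike, so the
# consciousness flag cannot be read off problem-solving ability.
conscious_scores = sorted(a["intelligence"] for a in agents if a["conscious"])
```

Under this framing, grounding rights in problem-solving ability would rank the invented "chess engine" above a conscious person, which is exactly the catastrophic implication the abstract warns against.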
AI & Society, 2023
The possibility of AI consciousness depends much on the correct answer to the mind-body problem: how does our material brain generate subjective consciousness? If a materialistic answer is valid, machine consciousness must be possible, at least in principle, though the actual instantiation of consciousness may still take a very long time. If a non-materialistic one (either mentalist or dualist) is valid, machine consciousness is much less likely, perhaps impossible, as some mental element may also be required. Some recent advances in neurology (despite the separation of the two hemispheres, our brain as a whole is still able to produce only one conscious agent; the rejection of the denial of free will that the Libet experiments were previously thought to establish) and many results of parapsychology (on medium communications, memories of past lives, near-death experiences) suggestive of survival after our biological death strongly support the non-materialistic position and hence the much lower likelihood of AI consciousness. Instead of being concerned about AI turning conscious, worrying about machine ethics, and trying to instantiate AI consciousness soon, we should perhaps focus more on making AI less costly and more useful to society.