
Turing's Error-revised

Many important lines of argumentation have been presented over the last decades claiming that machines cannot think like people. Yet it has been possible to construct devices and information systems that replace people in tasks previously performed by humans precisely because those tasks require intelligence. The long and varied discourse over what machine intelligence is suggests that something is unclear in the foundations of the discourse itself. Therefore, we critically studied the foundations of the theory languages used. By looking critically at some of the main arguments about machine thinking, one can find unifying factors. Most of them rest on the fact that computers cannot perform sense-making selections without human support and supervision. This calls attention to mathematics and computation itself as a representation-constructing language and as a theory language for analysing human mentality. It becomes apparent that selections must be based on relevance, i.e., on why some elements of a set belong to one class and others do not. Since there is no mathematical justification for such a selection, relevance and related concepts lie beyond the expressive power of mathematics and computation. Consequently, Turing erroneously assumed that mathematics and formal languages are equivalent to natural languages. He missed the fact that mathematics cannot express relevance, and for this reason mathematical representations cannot be complete models of the human mind.

Preface

This paper is of a programmatic nature. We fully acknowledge the enormous achievements of modern science, engineering and design attained by calling on the most advanced machine models, as we have in the past and will continue to do in the future. We will not primarily discuss the physical limitations (Markov 2014), but instead focus on the conceptual constraints: the intuitive assumptions of the underlying theories and their foundations (Saariluoma 1997). We try to understand and find an answer to the fundamental question: is the human mind capable of understanding itself beyond computability? We question the mainstream assumption that everything is (or at least should be) 'computational' (Chatelin 2012, Chalmers 1996, Sun, Wilson, and Lynch 2016). We argue that the foundations of such an assumption are (still) not fully justifiable. Therefore, we imitate Kant's (1781/1922) famous 'Copernican Revolution' from a kind of Wittgensteinian (1921/1974) perspective and ask whether the properties of the theory language itself used in the discourse can explain why the problems have proven so hard. In other words, we ask whether formal theory languages (i.e., logic, mathematics and computation) are powerful enough to express the problems of human thinking and to represent thoughts. Since many of the foundational issues concentrate on one theoretical construct, the Turing machine (TM), we must once again consider whether people 'think like machines'.
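As a minimal illustration of the point about sense-making selections (a sketch of our own, not drawn from Turing's work, with illustrative names such as select and relevant), consider how any computable selection procedure merely applies a relevance criterion that a human has already encoded; the criterion itself enters the formalism from outside.

```python
from typing import Callable, Iterable, List, TypeVar

T = TypeVar("T")

def select(items: Iterable[T], relevant: Callable[[T], bool]) -> List[T]:
    """Return the elements judged relevant.

    The procedure is purely mechanical: it can only apply the predicate
    it is given. Which predicate counts as 'relevant' is not decided
    inside the computation; it must be supplied by a human.
    """
    return [x for x in items if relevant(x)]

# The machine classifies flawlessly once told what matters, but the
# choice of criterion (evenness? primality? size?) is made outside
# the formalism.
print(select(range(10), lambda n: n % 2 == 0))  # [0, 2, 4, 6, 8]
```

The sketch is not an argument in itself; it merely makes concrete where, in our view, the relevance decision sits relative to the computation.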