This chapter conceives the history of neural networks as emerging from two millennia of attempts to rationalise and formalise the operation of mind. It begins with a brief review of early classical conceptions of the soul, which seated the mind in the heart; then discusses the subsequent Cartesian split of mind and body, before moving to analyse in more depth the twentieth-century hegemony identifying mind with brain: the identity that gave birth to the formal abstractions of brain and intelligence we know as 'neural networks'. The chapter concludes by analysing this identity (of intelligence and mind with mere abstractions of neural behaviour) by reviewing various philosophical critiques of formal connectionist explanations of 'human understanding', 'mathematical insight' and 'consciousness'; critiques which, if correct, in an echo of Aristotelian insight, suggest that cognition may be more profitably understood not just as a result of [mere abstractions of] neural firings, but as a consequence of real, embodied neural behaviour, emerging in a brain, seated in a body, embedded in a culture and rooted in our world: the so-called 4Es approach to cognitive science, comprising the Embodied, Embedded, Enactive and Ecological conceptions of mind.
Contents
1. Introduction: the body and the brain
2. First steps towards modelling the brain
3. Learning: the optimisation of network structure
4. The fall and rise of connectionism
5. Hopfield networks
6. The 'adaptive resonance theory' classifier
7. The Kohonen 'feature-map'
8. The multi-layer perceptron
9. Radial basis function networks
10. Recent developments in neural networks
11. "What artificial neural networks cannot do .."
12. Conclusions and perspectives
Glossary
Nomenclature
References
Biographical sketches
[Proceedings] 1992 IEEE International Conference on Systems, Man, and Cybernetics, 1992
The past two years have seen substantial progress in artificial neural networks (ANNs). New ANN control designs, combining reinforcement learning and generalized backpropagation [1,2], have demonstrated success on large-scale real-world problems which could not be solved by earlier designs, neural or nonneural. There now exists a ladder of designs, rising up from simple designs (of limited capability, but a good starting point), through to complex proven designs (which give more power and flexibility after they are mastered), up to new untested designs and ideas which should be able to replicate and explain human intelligence at the highest possible level. After a brief review of neurocontrol, and an explanation of reinforcement learning, this paper asks what implications these designs have for our understanding of the human mind. It argues that this new mathematics is fully compatible with older deep insights into the human mind, due to humanistic thinkers East and West. One may therefore hope that this new view of the human mind may be of value in unifying important strands of human culture. From an engineering point of view, the human brain is simply a computer, an information processing system. The function of any computer, as a whole system, is to compute its outputs. The outputs of the brain are control signals to muscles and glands. Therefore the brain as a whole system is a neurocontroller (a neural net control system) [3]. To understand the brain in functional, mathematical terms we should therefore focus on the subject of neurocontrol.
2018
Ali Rahimi, a best paper award recipient at NIPS 2017, labelled the current state of Deep Learning (DL) progress as "alchemy". Yann LeCun, one of the prominent figures in DL R&D, was insulted by this expression. However, in his response, LeCun did not claim that DL designers know how and why their DL systems achieve such surprising performance. The possible reason for this cautiousness is that no one knows how and in what way system input data is transformed into semantic information at the system's output. And this, certainly, has its own reason: no one knows what information is! I dare to offer my humble clarification of this obscure and usually untouched matter. I hope someone will be ready to line up with me.
Technological Forecasting and Social Change, 1991
Journal of the Society of Dyers and Colourists, 1998
Models of Neurons and Perceptrons: Selected Problems and Challenges, 2018
The rapid growth of the computational power of computers is one of the basic factors in the development of computer science. Informatics is therefore applied to solving ever more complex problems and, consequently, the demand for bigger and more complex software arises. It is not always possible, however, to use classical algorithmic methods to create such software, for two reasons. First, a good model of the relation between the input and output parameters often either does not exist at all or cannot be created at the present level of scientific knowledge. It is worth mentioning that the algorithmic approach requires knowledge of the explicit form of the mapping between the aforementioned sets of parameters. Secondly, even if the model is given, the algorithmic approach can be impossible owing to its over-complexity. This can be complexity of the task at the stage of algorithm creation, or the implemented system working too slowly; the latter is a critical parameter especially in on-line systems. Therefore, alternative approaches to the classical algorithmic one are being developed intensively. Artificial neural networks belong to this group of methods. Neurophysiological studies of the functional properties of nervous systems enabled researchers at the beginning of the 1940s to formulate the cybernetic model of the neuron [58] which, slightly modified, is commonly used up to the present. At the turn of the 1950s and 1960s the first artificial neural systems, PERCEPTRON and ADALINE, were constructed. They were electromechanical systems. The first algorithms for setting the synaptic weights in such systems were worked out. Those pioneering attempts attracted attention to the possibilities of such systems. At the same time, however, significant limits were discovered.
Nowadays, with hindsight, it is known that, on the one hand, the limits were caused by the lack of proper mathematical models of neural networks. On the other hand, they were caused by the application of just one type of artificial neural network: the multilayer
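As a rough illustration of the early weight-setting algorithms mentioned above, the classic perceptron learning rule can be sketched as follows. This is a minimal sketch, not code from any of the papers listed; the toy AND data, learning rate and epoch count are illustrative assumptions.

```python
# Sketch of the classic perceptron learning rule: update weights only when
# a sample is misclassified, nudging the decision boundary towards it.

def perceptron_train(samples, labels, lr=0.1, epochs=20):
    """Learn weights w and bias b so that sign(w.x + b) matches labels in {-1, +1}."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation >= 0 else -1
            if pred != y:  # update only on a mistake
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Linearly separable toy data: logical AND, labels mapped to {-1, +1}.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, -1, -1, 1]
w, b = perceptron_train(X, y)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1 for x in X]
```

For linearly separable data such as AND, the rule is guaranteed to converge; the limits discovered in the 1960s concerned exactly the problems (such as XOR) that no single-layer arrangement of these units can separate.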
ArXiv, 2021
While Artificial Neural Networks (ANNs) have yielded impressive results in the realm of simulated intelligent behavior, it is important to remember that they are but sparse approximations of Biological Neural Networks (BNNs). We go beyond comparison of ANNs and BNNs to introduce principles from BNNs that might guide the further development of ANNs as embodied neural models. These principles include representational complexity, complex network structure/energetics, and robust function. We then consider these principles in ways that might be implemented in the future development of ANNs. In conclusion, we consider the utility of this comparison, particularly in terms of building more robust and dynamic ANNs. This even includes constructing a morphology and sensory apparatus to create an embodied ANN, which when complemented with the organizational and functional advantages of BNNs unlocks the adaptive potential of lifelike networks. Introduction How can Artificial Neural Networks (ANN...
The 2011 International Joint Conference on Neural Networks, 2011
There has been important new cross-disciplinary work using neural network mathematics to unify key issues in engineering, technology, psychology and neuroscience-and many opportunities to create a discrete revolution in science by pushing this work further. This strain of research has a natural link to clinical and subjective human experience-the "first person science" of the mind. This paper discusses why and how, and gives several examples of links between neural network models and key phenomena in human experience, such as Freud's "psychic energy," the role of traumatic experience, the interpretation of dreams and creativity and the cultivation of human potential and sanity in general, and the biological foundations of language.
Neural Networks and Deep Learning A Textbook, 2018
One of the greatest mysteries of science lies in knowing exactly how the brain, and thus the mind, makes thought possible. Unlocking the secrets of the brain, and thereby understanding its fundamental areas of function, represents one of the greatest challenges of our time. According to most neural network researchers, the objective of AI is to seek an understanding of nature's capabilities so that the human race can engineer solutions to problems that cannot be solved by traditional computing. This paper takes a basic look at the physical, biological brain as the motivation behind AI.
International Journal of Creative Research Thoughts, 2017
The present study offers insights into the impact of artificial intelligence coupled with neural networks, which has emerged as one of the tools for tackling data science problems and performing rapid programming and processing. A neural network is, in essence, a mathematical function that converts a set of inputs into designated output components; these inputs may form the basis of computational learning. Given the significant usage of applications built on neural networks, the present study describes their importance in artificial intelligence.
Journal of the Society of Dyers and Colourists, 1998
Journal of Chemical Education, 1994
Sensor Review, 1997
Artificial neural networks (ANNs), or connectionist systems, are composed of simple non-linear processing units ('neurons') connected into networks. They may be regarded, furthermore, as systems which adapt their functionality as a result of exposure to information ('training'). They have become enormously popular for data analysis over the past decade. Applications have included speech recognition, face recognition, DNA sequence labelling, time series prediction and medical data analysis, to name but a few [13, 4, 9]. There are good reasons for this popularity, and hence for utilising them, but there are also good reasons for not using them: ANNs offer no 'magic' solution to problems and their internal 'workings' are often obscure to the user. As such, they would appear to be, at best, an unreliable tool and, at worst, downright misleading. Why is it, then, that they have achieved such notoriety that even hardened statisticians (such as [11]) have become interested in their use? To answer this question we first consider the two basic forms of data inference, classification and regression (prediction is regarded as a subset of the latter).
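A single such non-linear processing unit, adapting its weights from data, can be sketched as below. This is an illustrative example, not taken from the paper: one sigmoid 'neuron' trained by gradient descent to perform a toy classification (logical OR); the data, learning rate and epoch count are assumptions chosen for the demonstration.

```python
import math

def sigmoid(z):
    """The non-linear activation of the unit."""
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic_unit(samples, targets, lr=0.5, epochs=2000):
    """Fit a single sigmoid neuron by stochastic gradient descent on cross-entropy loss."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - t  # gradient of cross-entropy w.r.t. the pre-activation
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Classification example: logical OR, targets in {0, 1}.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 1]
w, b = train_logistic_unit(X, y)
preds = [round(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)) for x in X]
```

The same unit performs regression rather than classification if the sigmoid is dropped and the output is read as a real value; this is the sense in which the two basic forms of data inference share one underlying adaptive mechanism.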
Reviews of Modern Physics, 1999