AI
The paper explores the evolution and applications of Artificial Neural Networks (ANNs), detailing historical developments from early theoretical foundations laid by Warren McCulloch and Walter Pitts to contemporary advancements in Deep Learning that have spurred innovations in robotics and machine perception. It highlights significant milestones, including the enhancement of neural network capabilities and their integration into various technologies, illustrating the profound impact of neural networks on computing and artificial intelligence.
Reviews of Modern Physics, 1999
Artificial intelligence has been the inspiration and goal of computing since the discipline was first conceived by Alan Turing. Our understanding of the brain has increased in parallel with the development of computers capable of modelling its functions. While the human brain is vastly complex, far too much so for the computational abilities of modern supercomputers, interesting results have been obtained by modelling the nervous systems of smaller creatures such as the salamander [3].
2007
Neurocomputing is a comprehensive computational paradigm inspired by mechanisms of neural science and brain functioning that is rooted in learning instead of preprogrammed behavior. In this sense, neurocomputing is fundamentally different from the paradigm of programmed, instruction-based models of optimization. Artificial neural networks (neural networks, for short) exhibit some characteristics of biological neural networks in the sense that the constructed networks include components of distributed representation and processing and rely on various schemes of learning during their construction. The generalization capabilities of neural networks form one of their most outstanding features. The ability of neural networks to generalize, namely, to develop solutions that are meaningful beyond the scope of the learning data, is commonly exploited in various applications. From the architectural standpoint, a neural network consists of a collection of simple nonlinear processing components called neurons, which are combined via a net of adjustable numeric connections. The development of a neural network is realized through learning: this means choosing an appropriate network structure and a learning procedure that achieve the goals of the intended application. Neural networks have been successfully applied to a variety of problems in pattern recognition, signal prediction, optimization, control, and image processing. Here, we summarize the most essential architectural and development aspects of neurocomputing. Computational model of neurons: a typical mathematical model of a single neuron (Anthony and Bartlett, 1999) comes in the form of an n-input, single-output nonlinear mapping (Fig. B1) described as y = f( Σ_{i=1}^{n} w_i x_i )  (B1)
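As a rough illustration of Eq. (B1), the sketch below computes the output of a single n-input neuron. The choice of tanh for the nonlinearity f, and the particular weights and inputs, are assumptions made for the example and are not taken from the abstract.

```python
import numpy as np

def neuron_output(x, w, f=np.tanh):
    """Eq. (B1): y = f(sum_i w_i * x_i); tanh is assumed for f here."""
    return f(np.dot(w, x))

# Illustrative 3-input example (values are arbitrary)
x = np.array([0.5, -1.0, 2.0])   # input signals x_1 .. x_3
w = np.array([0.8, 0.2, -0.5])   # adjustable connection weights w_1 .. w_3
y = neuron_output(x, w)          # nonlinear mapping of the weighted sum
print(f"neuron output y = {y:.4f}")
```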
Artificial Neural Networks - Architectures and Applications, 2013
This chapter traces the history of neural networks as it emerged from two millennia of attempts to rationalise and formalise the operation of mind. It begins with a brief review of early classical conceptions of the soul, seating the mind in the heart; then discusses the subsequent Cartesian split of mind and body, before moving to analyse in more depth the twentieth-century hegemony identifying mind with brain, the identity that gave birth to the formal abstractions of brain and intelligence we know as 'neural networks'. The chapter concludes by analysing this identity (of intelligence and mind with mere abstractions of neural behaviour) through various philosophical critiques of formal connectionist explanations of 'human understanding', 'mathematical insight' and 'consciousness'; critiques which, if correct, in an echo of Aristotelian insight, suggest that cognition may be more profitably understood not just as a result of [mere abstractions of] neural firings, but as a consequence of real, embodied neural behaviour, emerging in a brain, seated in a body, embedded in a culture and rooted in our world; the so-called 4Es approach to cognitive science: the Embodied, Embedded, Enactive, and Ecological conceptions of mind.
Contents
1. Introduction: the body and the brain
2. First steps towards modelling the brain
3. Learning: the optimisation of network structure
4. The fall and rise of connectionism
5. Hopfield networks
6. The 'adaptive resonance theory' classifier
7. The Kohonen 'feature-map'
8. The multi-layer perceptron
9. Radial basis function networks
10. Recent developments in neural networks
11. "What artificial neural networks cannot do .."
12. Conclusions and perspectives
Glossary
Nomenclature
References
Biographical sketches
Expert Systems, 1990
A neural network is a collection of neurons that are interconnected and interact through signal-processing operations. The traditional term "neural network" refers to a biological neural network, i.e., a network of biological neurons. The modern meaning of the term also includes artificial neural networks, built of artificial neurons or nodes. Machine learning comprises adaptive mechanisms that allow computers to learn from experience, by example, and by analogy; these learning capabilities can improve the performance of an intelligent system over time. One of the most popular approaches to machine learning is artificial neural networks. An artificial neural network consists of a number of very simple, interconnected processors, called neurons, which are modelled on the biological neurons of the brain. The neurons are connected by weighted links that pass signals from one neuron to another. Each connection has a numerical weight associated with it; the weights are the basis of long-term memory in artificial neural networks and express the strength, or importance, of each neuron input. An artificial neural network "learns" through repeated adjustment of these weights.
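As one concrete instance of this repeated weight adjustment, the sketch below trains a single perceptron-style neuron with the classic error-correction rule. The learning rate, the AND task, and all names are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np

def train_perceptron(X, t, epochs=20, lr=0.1):
    """Repeatedly adjust connection weights from labelled examples.

    X : (n_samples, n_inputs) input patterns
    t : (n_samples,) target outputs in {0, 1}
    """
    w = np.zeros(X.shape[1])  # weights act as the network's long-term memory
    b = 0.0                   # bias term
    for _ in range(epochs):
        for x, target in zip(X, t):
            y = 1.0 if np.dot(w, x) + b >= 0 else 0.0  # threshold neuron output
            error = target - y
            w += lr * error * x    # strengthen or weaken each input's weight
            b += lr * error
    return w, b

# Toy example: learn the logical AND of two inputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 0, 0, 1], dtype=float)
w, b = train_perceptron(X, t)
print("weights:", w, "bias:", b)
```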
2021
Recent developments in the technological domain have increased the interactions between the artificial and natural spheres, leading to a growing interest in the ethical, legal and philosophical implications of AI research. The present paper aims to create an interdisciplinary discussion of the issues raised by the use and implementation of artificial intelligence algorithms, robotics, and applied solutions in the neuroscience and biotechnology fields. Building on the findings of the webinar "Workshop neuroni artificial e biologici: etica e diritto" (a workshop on artificial and biological neurons: ethics and law), this work explores the issues discussed in the workshop, shows both the existing challenges and opportunities, and proposes ways forward to overcome some of the problems investigated.
In our book “Neural Engineering: Representation, Transformations and Dynamics”, MIT Press 2003, Chris Eliasmith and I present a unified framework that describes the function of neurobiological systems through the application of the quantitative tools of systems engineering. Our approach is not revolutionary, but more evolutionary in nature, building on many current and generally disparate approaches to neuronal modeling. The basic premise is that the principles of information processing apply to neurobiological systems.
2008
Biological brains and engineered electronic computers fall into different categories. Both are examples of complex information processing systems, but beyond this point their differences outweigh their similarities. Brains are flexible, imprecise, error-prone and slow; computers are inflexible, precise, deterministic and fast. The sets of functions at which each excels are largely non-intersecting. They simply seem to be different types of system.
IEEE Canadian Review, 2003
The incredible, often subtle complexity of neural systems can seem like an engineer's nightmare. But when these systems are examined carefully, they can become the engineer's dream: a means of studying robust, complex systems. Using techniques from information theory, control theory, and signals-and-systems analysis, it is possible to formulate a framework for building large, biologically plausible simulations of neural systems. These simulations help us learn how the underlying neural systems work and how to obtain good solutions to the complex problems those systems face.
The first few pages of any good introductory book on neurocomputing contain a cursory description of neurophysiology and how it has been abstracted to form the basis of artificial neural networks as we know them today. In particular, artificial neurons considerably simplify the behavior of their biological counterparts. It is our view that, in order to gain a better understanding of how biological systems learn and remember, it is necessary to have accurate models on which to base computerized experimentation. In this paper we describe an artificial neuron that is more realistic than most models currently in use. The model is based on conventional artificial neural networks (and is easily computerized) and is currently being used in our investigations into learning and memory.
Proceedings of the IEEE, 2000
Ever since the publication of Santiago Ramón y Cajal's drawings of neurons - in his words, those "mysterious butterflies of the soul" - it has been clear that the nervous system is composed of a large number of such cells connected to one another to form a network. Long axons, ending in terminals which form synapses to the dendrites which branch out from neighbouring neurons, transmit bursts of electric current and enable neurons somehow to cooperate and yield the astonishing emergent phenomenon known as thought.
Patterns, 2021
Three dissimilar methodologies in the field of artificial intelligence (AI) appear to be following a common path toward biological authenticity. This trend could be expedited by using a common tool, artificial nervous systems (ANS), for recreating the biology underpinning all three. ANS would then represent a new paradigm for AI with application to many related fields.
In this research project, the features of biological and artificial neural networks were studied by reviewing existing works by authorities, in print and electronic form, on biological and artificial neural networks. The features were then assessed and evaluated, and a comparative analysis of the two networks was carried out. Metrics such as structure, layers, size and functional capability of neurons, learning capability, style of computation, processing elements, processing speed, connections, connection strength, information storage, information transmission, communication-media selection, signal transduction and fault tolerance were used as the basis for comparison. A major finding of the research was that artificial neural networks serve as the platform for neurocomputing technology and, as such, are a major driver of the development of neuron-like computing systems. It was also found that information processing in future computer systems will be greatly influenced by the adoption of artificial neural network models.