2017
Even as machines known as “deep neural networks” have learned to converse, drive cars, beat video games and Go champions, dream, paint pictures and help make scientific discoveries, they have also confounded their human creators, who never expected so-called “deep-learning” algorithms to work so well. No underlying principle has guided the design of these learning systems, other than vague inspiration drawn from the architecture of the brain (and no one really understands how that operates either).
2019
This paper analyzes the rapid and unexpected rise of deep learning within Artificial Intelligence and its applications. It tackles the possible reasons for this remarkable success, providing candidate paths towards a satisfactory explanation of why it works so well, at least in some domains. A historical account is given of the ups and downs that have characterized neural networks research and its evolution from "shallow" to "deep" learning architectures. A precise account of "success" is given, in order to sieve out aspects pertaining to the marketing or sociology of research; the remaining aspects seem to certify a genuine value of deep learning that calls for explanation. The two main alleged propelling factors of deep learning, namely computing hardware performance and neuroscience findings, are scrutinized and evaluated as relevant but insufficient for a comprehensive explanation. We review various attempts that have been made to provide mathematical foundations able to justify the efficiency of deep learning, and we deem this the most promising road to follow, even though the current achievements are too scattered and apply only to very limited classes of deep neural models. The authors' take is that most of what explains why deep learning works at all, and even works very well across so many domains of application, is still to be understood, and further research addressing the theoretical foundations of artificial learning is still very much needed.
ResearchGate, 2018
Deep learning has been one of the most discussed and researched topics of the last decade. Deep learning has, without a doubt, greatly advanced the field of artificial intelligence and its related applications. However, why and how deep learning works is not yet fully understood. Although deep learning performs well in many areas and keeps improving day by day, many researchers doubt its future prospects for mankind, complaining that the methodology behaves like a black box while producing surprising results. We therefore undertake a study of why deep learning works. We present and discuss several cases in which insight has been claimed into why deep learning works and why deep learning may fail.
2018
First and foremost, I would like to thank the six informants who, without hesitation, answered yes when I asked them to contribute to this thesis. Thank you for contributing with your time, energy and intellect! Secondly, I want to express my gratitude and admiration for my supervisors, The Dynamic Duo, Associate Professor Eva M. L. Björk and Professor Kåre Solfjeld. Thank you for your enthusiasm, encouragement, support, and faster-than-lightning responses! A big thank you to Bjørn Bolstad who kindly let me borrow the title from one of his blog posts. That title perfectly sums up what I set out to investigate in this thesis. To my partner John Rune: Thank you for being a computer-wiz and helping me out with all sorts of tech-trouble, but most of all for believing in me when I did not. Lastly, to my family, friends, and colleagues who have been cheering me on. Thank you for your interest and support! And yes, I will be taking weekends off from now on.
In this paper we discuss the concepts of Deep Learning (DL). Deep learning has become an extremely active research area in the machine learning and pattern recognition community. It has achieved huge success in speech recognition, computer vision, and language processing. This paper covers the fundamental concepts of deep learning along with a list of its current and future applications.
Currently, we use huge numbers of parameters and extremely complicated deep neural networks to ensure strong performance on datasets. Most of them achieve great results on some tasks, but they remain black boxes and we do not know the details of their internal processes. Starting from a single sample, I introduce a novel theory of deep learning networks. I argue that the essence of a deep learning network is a large number of matrix multiplications. In particular, each sample has its own matrix that maps the sample to its result, and a deep learning network memorizes some of these corresponding matrices. Training a deep learning network is the search for a set of parameters that generates as many matrices matching the samples as possible. If a model can memorize the matrices of many samples, it achieves a good result on the test set, which is called generalization; otherwise, the model is said to be overfitting or underfitting. I present empirical results on CIFAR-10 [Krizhevsky, 2012] supporting this theory.
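The per-sample-matrix claim above can be made concrete for one common case. The following minimal sketch (illustrative only; not the author's code, and the layer sizes are arbitrary) shows that a bias-free ReLU network, evaluated on one fixed sample, collapses into a single sample-specific matrix product, since each ReLU becomes a diagonal 0/1 mask determined by that sample:

```python
# Sketch: for a fixed input x, a bias-free ReLU network equals one
# sample-specific matrix M_x = W3 @ D2 @ W2 @ D1 @ W1, where the
# diagonal masks D_i depend on x. Sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(8, 8))
W3 = rng.normal(size=(3, 8))
x = rng.normal(size=4)

# Ordinary forward pass through two ReLU layers.
h1 = np.maximum(W1 @ x, 0)
h2 = np.maximum(W2 @ h1, 0)
y = W3 @ h2

# The same computation as one matrix "belonging to" this sample:
# each ReLU is a diagonal 0/1 mask fixed by the sample's activations.
D1 = np.diag((W1 @ x > 0).astype(float))
D2 = np.diag((W2 @ h1 > 0).astype(float))
M_x = W3 @ D2 @ W2 @ D1 @ W1
assert np.allclose(y, M_x @ x)
```

A different sample generally flips some mask entries and therefore gets a different matrix, which is the sense in which "each sample has its matrix".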
IRJET, 2022
Deep learning is a new field of machine learning (ML) research. It consists of numerous hidden artificial neural network layers. The deep learning methodology applies high-level model abstractions and transformations to massive databases. Deep learning architectures have recently made major strides in a variety of domains, and these developments have already had a big impact on artificial intelligence. Additionally, the advantages of the layer-based hierarchy and nonlinear operations of the deep learning methodology are discussed and contrasted with those of more traditional techniques in widely used applications. Deep learning has also had a significant impact on face recognition methods, as demonstrated by Facebook's highly effective DeepFace technology, which enables users to tag photos.
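The role of nonlinear operations in the layer-based hierarchy mentioned above can be demonstrated in a few lines. This sketch (illustrative sizes, not tied to the paper) shows that stacked layers without a nonlinearity collapse into a single linear map, so depth alone adds nothing; the nonlinearity between layers is what prevents the collapse:

```python
# Sketch: two stacked linear layers equal one linear layer, so the
# hierarchy only gains power from the nonlinearities between layers.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 7))    # second "layer"
B = rng.normal(size=(7, 10))   # first "layer"
x = rng.normal(size=10)

deep_linear = A @ (B @ x)      # two layers, no nonlinearity
collapsed = (A @ B) @ x        # one equivalent layer
assert np.allclose(deep_linear, collapsed)

# With a ReLU in between, the composition is no longer expressible as
# a single matrix for all inputs; this is where depth starts to matter.
deep_nonlinear = A @ np.maximum(B @ x, 0)
```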
2018
Deep neural networks are powerful machine learning approaches that have exhibited excellent results on many classification tasks. However, they are considered black boxes, and some of their properties remain to be formalized. In the context of image recognition, it is still an arduous task to understand why an image is recognized or not. In this study, we formalize some properties shared by eight state-of-the-art deep neural networks in order to grasp the principles that allow a given deep neural network to classify an image. Our results, tested on these eight networks, show that an image can be sub-divided into several regions (patches) responding at different degrees of probability (local property). With the same patch, some locations in the image can answer two (or three) orders of magnitude higher than other locations (spatial property). Some locations are activators and others inhibitors (activation-inhibition property). The repetition of the same patch can increase (or decrease) …
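The patch-based probing described above can be sketched as follows. This is a hypothetical reconstruction, not the authors' protocol: it pastes the same patch at each grid location of a neutral image and records how one class's probability responds, labeling locations as activators or inhibitors. The tiny untrained network, image size, and class index are stand-ins; in practice a pretrained state-of-the-art model would be probed:

```python
# Sketch of per-location patch probing (hypothetical reconstruction).
import torch
import torch.nn as nn

# Stand-in classifier; a pretrained network would be used in practice.
net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)
net.eval()

image = torch.zeros(1, 3, 64, 64)   # neutral background
patch = torch.rand(3, 8, 8)         # the probe patch
target_class = 3                    # class whose response we track

with torch.no_grad():
    baseline = net(image).softmax(-1)[0, target_class]
    # Slide the patch over a grid and compare each response to baseline.
    for y in range(0, 64 - 8 + 1, 8):
        for x in range(0, 64 - 8 + 1, 8):
            probe = image.clone()
            probe[0, :, y:y + 8, x:x + 8] = patch
            p = net(probe).softmax(-1)[0, target_class]
            effect = "activator" if p > baseline else "inhibitor"
            print(f"({y:2d},{x:2d}) p={p:.4f} ({effect})")
```

Plotting the recorded probabilities as a heatmap over locations would expose the spatial and activation-inhibition properties the abstract describes.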
In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.
APSIPA Transactions on Signal and Information Processing
In this paper we look at recent advances in artificial intelligence. Decades in the making, a confluence of several factors in the past few years has culminated in a string of breakthroughs on many longstanding research challenges. A number of problems that were considered too challenging just a few years ago can now be solved convincingly by deep neural networks. Although deep learning appears to reduce algorithmic problem solving to a matter of data collection and labeling, we believe that many insights learned from ‘pre-Deep Learning’ works still apply and will be more valuable than ever in guiding the design of novel neural network architectures.
Cognitive Systems Research, 2018
This paper attempts to historically map the significant success of deep neural networks in notably varied classification problems and application domains with near human-level performance. The paper also addresses the various doubts surrounding the acceptance of deep learning as a science of the future. The manuscript attempts to unveil the hidden capabilities of deep neural networks in enabling machines to perform, in a human-like way, tasks that can be learned through what we call observation and experience.