2017, Studies in Computational Intelligence
This article provides an overview of Deep Neural Networks (DNNs), detailing their historical evolution and operational principles. It describes how DNNs hierarchically process information to extract features, focusing on applications such as face recognition while contrasting DNN approaches with traditional methods. The paper also discusses the different learning modes of DNNs, their inspiration from biological neural networks, and a comparison of the storage capacities of the human brain and artificial neural networks.
The different CNN models use many layers that typically include a stack of linear convolution layers combined with pooling and normalization layers to extract the characteristics of the images. Unlike these models, and instead of using a linear filter for convolution, the network in network (NiN) model uses a multilayer perceptron (MLP), a nonlinear function, to replace the linear filter. This article presents a new deep network in network (DNIN) model based on the NiN structure, in which a universal approximator, the MLP with rectified linear units (ReLU), is used to improve classification performance. The use of the MLP increases connection density, which makes learning more difficult and slows training. In this article, instead of ReLU, we use the exponential linear unit (ELU) to address the vanishing gradient problem that can occur with ReLU and to speed up learning. In addition, the size of the convolution filters is reduced while the depth is increased in order to reduce the number of parameters. Finally, a batch normalization layer is applied to reduce the saturation of the ELUs, and a dropout layer is applied to avoid overfitting. Experimental results on the CIFAR-10 database show that the DNIN can reduce implementation complexity thanks to the reduction in adjustable parameters. The reduction in filter size also improves the recognition accuracy of the model. Keywords: Exponential linear unit (ELU) · Convolutional neural networks (CNNs) · Deep MLPconv · Image recognition · Network in Network (NiN)
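For illustration, a minimal sketch of one MLPconv-style block along the lines described above, assuming a PyTorch implementation; the layer widths, filter sizes, and dropout rate are illustrative choices, not the paper's exact configuration.

```python
# Sketch of an MLPconv block: a small spatial convolution followed by 1x1
# convolutions (the "micro MLP"), with ELU activations, batch normalization
# and dropout, as the DNIN description above suggests.
import torch
import torch.nn as nn

def mlpconv_block(in_ch, mid_ch, out_ch, drop_p=0.5):
    return nn.Sequential(
        nn.Conv2d(in_ch, mid_ch, kernel_size=3, padding=1),  # small spatial filter
        nn.BatchNorm2d(mid_ch),   # batch normalization reduces ELU saturation
        nn.ELU(),                 # ELU instead of ReLU against vanishing gradients
        nn.Conv2d(mid_ch, mid_ch, kernel_size=1),  # 1x1 conv = per-pixel MLP layer
        nn.ELU(),
        nn.Conv2d(mid_ch, out_ch, kernel_size=1),  # second 1x1 conv layer
        nn.ELU(),
        nn.Dropout(drop_p),       # dropout to limit overfitting
    )

block = mlpconv_block(3, 96, 96)
x = torch.randn(8, 3, 32, 32)     # CIFAR-10-sized input batch
print(block(x).shape)             # torch.Size([8, 96, 32, 32])
```

Stacking 1x1 convolutions after the spatial convolution is what replaces the linear filter with a small nonlinear MLP applied at every spatial location.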
Cornell University - arXiv, 2018
In recent years, deep learning has garnered tremendous success in a variety of application domains. This new field of machine learning has been growing rapidly and has been applied to most traditional application domains, as well as some new areas that present more opportunities. Different methods have been proposed based on different categories of learning, including supervised, semi-supervised, and unsupervised learning. Experimental results show state-of-the-art performance using deep learning when compared to traditional machine learning approaches in the fields of image processing, computer vision, speech recognition, machine translation, art, medical imaging, medical information processing, robotics and control, bioinformatics, natural language processing (NLP), cybersecurity, and many others. This report presents a brief survey of the advances that have occurred in the area of DL, starting with the Deep Neural Network (DNN). The survey goes on to cover the Convolutional Neural Network (CNN), the Recurrent Neural Network (RNN) including Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU), the Auto-Encoder (AE), the Deep Belief Network (DBN), the Generative Adversarial Network (GAN), and Deep Reinforcement Learning (DRL). Additionally, we have included recent developments such as advanced variant DL techniques based on these DL approaches. This work considers most of the papers published after 2012, when the recent history of deep learning began. Furthermore, DL approaches that have been explored and evaluated in different application domains are also included in this survey. We also include recently developed frameworks, SDKs, and benchmark datasets that are used for implementing and evaluating deep learning approaches. Some surveys have been published on deep learning using neural networks [1, 38], along with a survey on RL [234]. However, those papers have not discussed the individual advanced techniques for training large-scale deep learning models or the recently developed methods for generative models [1].
Learning is a process by which a system improves its performance from experience. Since 2006, deep learning has emerged as a new area of machine learning, impacting a wide range of signal and information processing work in both traditional and new scopes. Many traditional machine learning and signal processing techniques exploit shallow architectures, which contain a single layer of nonlinear feature transformation. Examples of shallow architectures are conventional hidden Markov models (HMMs), maximum entropy (MaxEnt) models, support vector machines (SVMs), kernel regression, and the multilayer perceptron (MLP) with a single hidden layer. A standard neural network (NN) consists of many simple, connected processors called neurons, each producing a sequence of real-valued activations. Input neurons are activated through sensors perceiving the environment; other neurons are activated through weighted connections from previously active neurons. Human information processing mechanisms (e.g., vision and speech), however, suggest the need for deep architectures to extract complex structure and build internal representations from rich sensory inputs (e.g., natural images and their motion, speech, and music). It is natural to believe that the state of the art can be advanced in processing these types of media signals if efficient and effective deep learning algorithms are developed. Signal processing systems with deep architectures are composed of many layers of nonlinear processing stages, where each lower layer's outputs are fed to its immediate higher layer as input. Deep learning is a type of machine learning in which a model learns to perform classification tasks directly from images, text, or sound. Deep learning is usually implemented using a neural network architecture. The term "deep" refers to the number of layers in the network: the more layers, the deeper the network. Traditional neural networks contain only 2 or 3 layers, while deep networks can have hundreds. A few examples of deep learning at work: a self-driving vehicle slows down as it approaches a pedestrian crosswalk, an ATM rejects a counterfeit bank note, a smartphone app gives an instant translation of a foreign street sign. Deep learning is especially well suited to identification applications such as face recognition, text translation, voice recognition, and advanced driver assistance systems, including lane classification and traffic sign recognition. In this paper we present some of the techniques used to achieve deep learning and their corresponding applications.
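As a rough illustration of the layered view described above, the following sketch (assuming PyTorch, with arbitrary layer sizes and depth) stacks several nonlinear stages so that each layer's output feeds the next as input.

```python
# Minimal sketch of "many layers of nonlinear processing stages":
# the depth and widths here are illustrative only.
import torch
import torch.nn as nn

layers = []
width = 256
layers.append(nn.Linear(784, width))              # input stage (e.g. a flattened image)
for _ in range(5):                                # five hidden nonlinear stages -> "deep"
    layers.extend([nn.ReLU(), nn.Linear(width, width)])
layers.extend([nn.ReLU(), nn.Linear(width, 10)])  # output stage (e.g. 10 classes)

deep_net = nn.Sequential(*layers)
x = torch.randn(32, 784)
print(deep_net(x).shape)   # torch.Size([32, 10])
```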
2018
In recent years, Deep Neural Networks (DNNs) have been used widely for a broad range of machine learning and data-mining purposes. This pattern recognition approach can handle highly nonlinear problems. In this work, three main contributions to DNNs are presented. (1) A method called Semi Parallel Deep Neural Networks (SPDNN) is introduced, wherein several deep architectures are mixed and merged using a graph contraction technique to take advantage of all the parent networks. (2) The importance of data is investigated in several experiments, and an augmentation technique known as Smart Augmentation is presented. (3) To extract more information from a database, multiple works on Generative Adversarial Networks (GANs) are presented, wherein the joint distribution of data and its ground truth is approximated; in other projects, conditional generators for classification and regression problems are trained and tested.
Path of Science, 2024
Deep learning (DL), a sophisticated subset of machine learning (ML), has emerged as a transformative force within the broader realm of artificial intelligence (AI). By leveraging architectures such as convolutional neural networks (CNNs), DL has significantly advanced image recognition capabilities, enabling systems to identify and classify visual data with remarkable accuracy. This technology is not limited to image recognition; it has also made strides in diverse areas such as speech recognition, language translation, automated gameplay, healthcare diagnostics, and the development of self-driving vehicles. The success of DL in this domain can be attributed to its ability to learn hierarchical representations of data, allowing for improved feature extraction and pattern recognition. Despite its impressive performance, deep learning is not without limitations. Key challenges include its reliance on vast amounts of labelled data, which can be difficult and expensive to obtain, its lack of common-sense reasoning, and its difficulties in addressing complex, multifaceted problems.
Developing intelligent systems involves artificial intelligence approaches, including artificial neural networks. Here, we present a tutorial on Deep Neural Networks (DNNs) and some insights into the origin of the term "deep"; references to deep learning are also given. Restricted Boltzmann Machines, which are the core of DNNs, are discussed in detail. An example of a simple two-layer network, performing unsupervised learning on unlabeled data, is shown. Deep Belief Networks (DBNs), which are used to build networks with more than two layers, are also described. Moreover, examples of supervised learning with DNNs performing simple prediction and classification tasks are presented and explained. This tutorial includes two intelligent pattern recognition applications: handwritten digits (the benchmark known as MNIST) and speech recognition.
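A minimal sketch of the kind of two-layer unsupervised building block the tutorial refers to: a Restricted Boltzmann Machine updated with one step of contrastive divergence (CD-1). It is written with NumPy only, and all shapes, data, and hyperparameters are illustrative assumptions rather than the tutorial's own example.

```python
# RBM with binary visible/hidden units, trained by one CD-1 update.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden, lr = 784, 64, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)
b_h = np.zeros(n_hidden)

def cd1_step(v0):
    # positive phase: hidden activations driven by the data
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # negative phase: one reconstruction of the visible units
    p_v1 = sigmoid(h0 @ W.T + b_v)
    p_h1 = sigmoid(p_v1 @ W + b_h)
    # gradient estimates averaged over the mini-batch
    batch = v0.shape[0]
    dW = (v0.T @ p_h0 - p_v1.T @ p_h1) / batch
    return dW, (v0 - p_v1).mean(0), (p_h0 - p_h1).mean(0)

v_batch = (rng.random((16, n_visible)) < 0.1).astype(float)  # fake binary "images"
dW, db_v, db_h = cd1_step(v_batch)
W += lr * dW; b_v += lr * db_v; b_h += lr * db_h
```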
IRJET, 2022
Deep learning is a new field of machine learning (ML) study. It comprises numerous hidden artificial neural network layers. The deep learning methodology applies high-level model abstractions and transformations to massive databases. Deep learning architectures have recently made major strides in a variety of domains, and these developments have already had a big impact on artificial intelligence. Additionally, the advantages of the layer-based hierarchy and nonlinear operations of the deep learning methodology are discussed and contrasted with those of more traditional techniques in widely used applications. Deep learning also has a significant impact on face recognition methods, as demonstrated by Facebook's highly effective DeepFace technology, which enables users to tag photos.
2017
Deep learning is an emerging area of machine learning (ML) research. It comprises multiple hidden layers of artificial neural networks. The deep learning methodology applies nonlinear transformations and high-level model abstractions to large databases. The recent advancements in deep learning architectures within numerous fields have already provided significant contributions to artificial intelligence. This article presents a state-of-the-art survey on the contributions and the novel applications of deep learning. The following review chronologically presents how, and in what major applications, deep learning algorithms have been utilized. Furthermore, the advantages of the deep learning methodology, its layered hierarchy, and its nonlinear operations are presented and compared with more conventional algorithms in common applications. The state-of-the-art survey further provides a general overview of the novel concept and the ever-increasing advantages and popularity of deep learning.
Deep Learning and Edge Computing Solutions for High Performance Computing, 2021
Oduntan Adeola, 2019
ABSTRACT A Deep Neural Network is an artificial neural network with multiple layers between the input and output layers. The architecture is inspired by the hierarchical structure of the brain. Deep neural networks feature a hierarchical, layer-wise arrangement of non-linear activation functions called neurons, fed by inputs into the network. Deep neural networks are typically feed-forward networks in which data flows from the input layer to the output layer without looping back. The term 'deep' refers to the number of hidden layers in the network: while ordinary neural networks have two to three hidden layers, deep neural networks can have as many as thousands of hidden layers (Nataniel K. and Jeff Brondy). The purpose of implementing a deep neural network is to find a transformation of data for making a decision. Deep neural networks serve as a quick way to build classification and regression models that are very difficult to program by hand. Some of the techniques that allow deep neural networks to solve problems are back propagation, which computes the partial derivatives of a function; dropout, which corrects the problems associated with over-fitting by effectively combining the predictions of different large neural networks at test time; max-pooling; batch normalization; Long Short-Term Memory (LSTM); transfer learning; continuous bag of words; etc. Deep neural networks have been applied to numerous fields, including computer vision, speech recognition, natural language processing (NLP), audio recognition, social network filtering, machine translation, bio-informatics, drug design, medical image analysis, and board game programs, where they have produced results comparable, and in some cases superior, to human experts (Karen Simonyan, 2014). The industries and areas to which deep neural networks can be applied in the future are categorized into health, agriculture, banking, multimedia, etc.; they will also serve numerous industries in technology roles, such as reducing the lag and bandwidth bottlenecks that result from the internet's plurality of media content.
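To make a few of the listed techniques concrete, here is a minimal sketch (assuming PyTorch) that combines batch normalization, max-pooling, dropout, and back-propagation in one small classifier; the architecture and sizes are illustrative only.

```python
# Small convolutional classifier using several of the techniques named above.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1),
    nn.BatchNorm2d(16),        # batch normalization
    nn.ReLU(),
    nn.MaxPool2d(2),           # max-pooling
    nn.Flatten(),
    nn.Dropout(0.5),           # dropout against over-fitting
    nn.Linear(16 * 14 * 14, 10),
)

model.train()                  # dropout active during training
x = torch.randn(4, 1, 28, 28)
loss = nn.CrossEntropyLoss()(model(x), torch.tensor([0, 1, 2, 3]))
loss.backward()                # back-propagation computes the partial derivatives
model.eval()                   # dropout disabled (ensemble-averaging behaviour) at test time
```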
https://www.igi-global.com/, 2020
The chapter covers deep learning fundamentals and recent trends. It mentions many advanced applications and the deep learning models and networks that can solve those applications in a very smart way. A discussion of some techniques for computer vision problems, and how to solve them with a deep learning approach, is included. After acquiring fundamental knowledge of the background theory, one can create or solve applications. The current state of the art of deep learning for education, healthcare, agriculture, industrial, organizational, and research and development applications is growing very fast. The chapter also covers the types of learning in a deep learning approach, what kind of data set is required, and what kind of hardware facility is required for a particular complex problem. Deep learning algorithms have been designed for unsupervised learning problems, but deep learning also solves supervised learning problems for a wide variety of tasks.
Face recognition (FR), the process of identifying people through facial images, has numerous practical applications in the areas of biometrics, information security, access control, law enforcement, smart cards, and surveillance systems. Convolutional Neural Networks (CovNets), a type of deep network, have proved successful for FR. For real-time systems, some preprocessing steps such as sampling need to be done before using CovNets. Even then, complete images (all the pixel values) are passed as input to the CovNets, and all the steps (feature selection, feature extraction, training) are performed by the network. This is the reason that implementing CovNets is sometimes complex and time-consuming. CovNets are still at a nascent stage; although the accuracies obtained are very high, they still have a long way to go. This paper proposes a new way of using a deep neural network (another type of deep network) for face recognition. In this approach, instead of providing raw pixel values as input, only the extracted facial features are provided. This lowers the complexity while providing an accuracy of 97.05% on the Yale faces dataset.
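A minimal sketch of the feature-based idea described above, assuming PyTorch: a pre-computed facial-feature vector, rather than raw pixels, is fed to a small fully connected network. The feature length, layer widths, and the external feature extractor are assumptions for illustration, not the paper's configuration.

```python
# Classify identities from an externally extracted facial-feature vector.
import torch
import torch.nn as nn

n_features, n_subjects = 128, 15      # e.g. 15 identities, as in the Yale faces set
classifier = nn.Sequential(
    nn.Linear(n_features, 64),
    nn.ReLU(),
    nn.Linear(64, n_subjects),
)

face_features = torch.randn(1, n_features)   # output of some external feature extractor
logits = classifier(face_features)
print(logits.argmax(dim=1))                  # predicted identity index
```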
In recent years, deep learning has been used in image classification, object tracking, pose estimation, text detection and recognition, visual saliency detection, action recognition, and scene labeling. Image classification is widely used in various fields, such as plant leaf disease classification and facial expression classification. To make bulky images manageable, image classification is done using the concept of a deep neural network. Among different types of models, convolutional neural networks have demonstrated high performance on image classification. In this paper we build a simple convolutional neural network for image classification and use it to complete the classification task. The paper contributes a methodology for more accurate classification of whole images rather than image feature extraction or image segmentation. The proposed work established a promising accuracy of 99.89%.
Deep learning is a new area of machine learning research. Deep learning technology applies nonlinear and advanced transformations of model abstraction to large databases. Recent developments show that deep learning has advanced various fields and greatly contributed to artificial intelligence. This article reviews the contributions and new applications of deep learning. The main target of this review is to summarize key points so that scholars can analyze the applications and algorithms; the review then investigates the main applications and the algorithms they use. In addition, the advantages of using the deep learning method and its hierarchical and nonlinear functioning are introduced and compared to traditional algorithms in common applications. The following three criteria were taken into consideration when choosing the areas of application: (1) expertise or knowledge of the author; (2) the successful application of deep learning technology has changed the field of application, such as voice recognition, chatbots, search technology, and vision; and (3) deep learning can have a significant impact on the application domain and benefit from recent research, as with natural language and text processing, information retrieval, and multimodal information processing resulting from multi-task deep learning. This review provides a general overview of a new concept and the growing benefits and popularity of deep learning, which can help researchers and students interested in deep learning methods.
With the recent advancement in digital technologies, data sets have become too large for traditional data processing and machine learning techniques to cope with effectively. Analyzing complex, high-dimensional, and noise-contaminated data sets is a huge challenge, and it is crucial to develop novel algorithms that are able to summarize, classify, and extract important information from them and convert it into an understandable form.
Pattern Recognition and Image Analysis, 2021
Training methods for deep neural networks (DNNs) are analyzed. It is shown that maximizing the likelihood function of the distribution of the input data P(x) in the space of synaptic connections of a restricted Boltzmann machine (RBM) is equivalent to minimizing the cross-entropy (CE) of the network error function and to minimizing the total mean squared error (MSE) of the network in the same space using linear neurons. The application of DNNs to the detection and recognition of product marking is considered.
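For reference, the three objectives that the abstract relates can be written in their standard forms; the notation below ($y$ for network outputs, $t$ for targets, $W$ for the synaptic weights) is an assumption, and the conditions under which the paper establishes the equivalence are not reproduced here.

```latex
% Standard forms of the three objectives, for a data set {x^(k)}, k = 1..N.
\begin{align}
  \mathcal{L}(W)      &= \sum_{k=1}^{N} \log P\!\left(x^{(k)}; W\right)
      && \text{(log-likelihood to be maximized)} \\
  E_{\mathrm{CE}}(W)  &= -\sum_{k=1}^{N} \sum_{i}
      \Big[ t_i^{(k)} \log y_i^{(k)} + \big(1 - t_i^{(k)}\big)\log\big(1 - y_i^{(k)}\big) \Big]
      && \text{(cross-entropy error)} \\
  E_{\mathrm{MSE}}(W) &= \frac{1}{N}\sum_{k=1}^{N} \sum_{i}
      \big( y_i^{(k)} - t_i^{(k)} \big)^{2}
      && \text{(total mean squared error, linear output neurons)}
\end{align}
```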
Proceedings of the 3rd International Conference on Networking, Information Systems & Security, 2020
The objective of this paper is to provide a comparative account of unsupervised and supervised deep learning models and their applications. The design of a model system requires careful attention to the following issues: definition of pattern classes, sensing environment, pattern representation, feature extraction and selection, cluster analysis, classifier design and learning, selection of training and test samples, and performance evaluation. Classification plays a vital role in deep learning algorithms, and we found that, although the error backpropagation learning algorithm provided by the supervised learning model is very efficient for a number of non-linear real-time problems, the KSOM of the unsupervised learning model offers efficient solutions and classification for the perception problem.
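A minimal sketch of one update step of a Kohonen self-organizing map (KSOM), the unsupervised model contrasted with back-propagation above; written with NumPy, with the map size, learning rate, and neighbourhood width chosen arbitrarily for illustration.

```python
# One KSOM update: find the best-matching unit and pull it (and its
# grid neighbours) toward the input sample.
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 10, 10, 3          # 10x10 map of 3-dimensional prototypes
W = rng.random((grid_h, grid_w, dim))    # prototype (weight) vectors
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

def som_step(x, lr=0.5, sigma=2.0):
    # best-matching unit (BMU) for input x
    dists = np.linalg.norm(W - x, axis=-1)
    bmu = np.unravel_index(dists.argmin(), dists.shape)
    # Gaussian neighbourhood around the BMU on the map grid
    grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
    h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))[..., None]
    W[...] = W + lr * h * (x - W)

for x in rng.random((100, dim)):         # 100 random 3-D samples (e.g. colours)
    som_step(x)
```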
2018
Deep learning is a subfield of machine learning. Learning can be supervised, semi-supervised, or unsupervised. There are different types of architectures for deep learning. In this paper we give an overview of the different architectures that are widely used and their application areas. Deep learning is applied in many areas such as image processing, speech recognition, data mining, natural language processing, social network filtering, machine translation, bioinformatics, and drug design. Index Terms: deep learning; deep learning architecture; machine learning
In this paper we discuss the concepts of Deep Learning (DL). Deep learning has become an extremely active research area in the machine learning and pattern recognition community. It has achieved huge success in the fields of speech recognition, computer vision, and language processing. This paper covers the fundamental concepts of deep learning along with a list of current and future applications.