2019
Support Vector Machines (SVMs) are among the most popular machine learning models for supervised problems and have proved to achieve strong performance in a broad range of prediction tasks. However, they can suffer from scalability issues when working with large sample sizes, a common situation in the big data era. Deep Neural Networks (DNNs), on the other hand, can handle large datasets with greater ease, and in this paper we propose Deep SVM models that combine the highly non-linear feature processing of DNNs with SVM loss functions. As we will show, these models can achieve performance similar to that of standard SVMs while scaling much better with sample size.
Integrated Computer-Aided Engineering, 2020
Kernel-based Support Vector Machines (SVMs), among the most popular machine learning models, usually achieve top performance in two-class classification and regression problems. However, their training cost is at least quadratic in sample size, making them unsuitable for large-sample problems. Deep Neural Networks (DNNs), by contrast, have a cost linear in sample size and can solve big data problems relatively easily. In this work we propose to combine the advanced representations that DNNs build in their last hidden layers with the hinge and ϵ-insensitive losses used in two-class SVM classification and regression. We thus obtain much better scalability while achieving performance comparable to that of SVMs. Moreover, we also show that the resulting Deep SVM models are competitive with standard DNNs in two-class classification problems and have an edge in regression ones.
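The two SVM losses the abstract refers to can be written down directly. The following NumPy sketch is illustrative only — the function names and toy values are not from the paper — and assumes the DNN's output is a scalar score f(x) with labels y ∈ {−1, +1}:

```python
import numpy as np

def hinge_loss(scores, y):
    """Two-class SVM classification loss: mean(max(0, 1 - y * f(x))),
    with labels y in {-1, +1}."""
    return np.mean(np.maximum(0.0, 1.0 - y * scores))

def eps_insensitive_loss(pred, y, eps=0.1):
    """SVM regression loss: absolute errors smaller than eps are ignored."""
    return np.mean(np.maximum(0.0, np.abs(y - pred) - eps))

# Toy scores, as if produced by a DNN's last hidden layer plus a linear output.
scores = np.array([1.5, -0.3, 0.8])
labels = np.array([1.0, -1.0, 1.0])
print(hinge_loss(scores, labels))  # only margin violations contribute
```

Both losses are convex in the final-layer outputs, so they can be minimized with the same stochastic gradient machinery used for standard DNN training.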
2013
In this paper we describe a novel extension of the support vector machine, called the deep support vector machine (DSVM). The original SVM has a single layer with kernel functions and is therefore a shallow model. The DSVM can use an arbitrary number of layers, in which lower-level layers contain support vector machines that learn to extract relevant features from the input patterns or from the extracted features of one layer below. The highest level SVM performs the actual prediction using the highest-level extracted features as inputs. The system is trained by a simple gradient ascent learning rule on a min-max formulation of the optimization problem. A two-layer DSVM is compared to the regular SVM on ten regression datasets and the results show that the DSVM outperforms the SVM.
Recently, fully-connected and convolutional neural networks have been trained to achieve state-of-the-art performance on a wide variety of tasks such as speech recognition, image classification, natural language processing, and bioinformatics. For classification tasks, most of these "deep learning" models employ the softmax activation function for prediction and minimize cross-entropy loss. In this paper, we demonstrate a small but consistent advantage of replacing the softmax layer with a linear support vector machine. Learning minimizes a margin-based loss instead of the cross-entropy loss. While there have been various combinations of neural nets and SVMs in prior art, our results using L2-SVMs show that simply replacing softmax with linear SVMs gives significant gains on popular deep learning datasets: MNIST, CIFAR-10, and the ICML 2013 Representation Learning Workshop's face expression recognition challenge.
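The swap the abstract describes amounts to changing the training objective at the output layer. A minimal NumPy sketch of the two objectives, assuming a final linear layer with weights `w` and two-class labels y ∈ {−1, +1} (names and constants here are illustrative, not the paper's code):

```python
import numpy as np

def l2_svm_loss(scores, y, w, C=1.0):
    """Squared-hinge (L2-SVM) objective:
    0.5 * ||w||^2 + C * sum(max(0, 1 - y * f(x))^2), labels y in {-1, +1}."""
    margins = np.maximum(0.0, 1.0 - y * scores)
    return 0.5 * np.dot(w, w) + C * np.sum(margins ** 2)

def softmax_cross_entropy(logits, labels):
    """The standard alternative being replaced: mean negative log-likelihood
    of the softmax probabilities; labels are integer class indices."""
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -np.mean(log_probs[np.arange(len(labels)), labels])
```

The squared hinge is differentiable everywhere, which is one reason the L2-SVM variant is convenient as a drop-in objective for backpropagation.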
Deep learning is currently an extremely active research area in the machine learning and pattern recognition community. It has gained huge successes in a broad area of applications such as speech recognition, computer vision, and natural language processing. With the sheer size of data available today, big data brings big opportunities and transformative potential for various sectors; on the other hand, it also presents unprecedented challenges to harnessing data and information. As the data keeps getting bigger, deep learning is coming to play a key role in providing big data predictive analytics solutions. In this paper, we provide a brief overview of deep learning, and highlight current research efforts, the challenges posed by big data, and future trends.
This writing summarizes and reviews the paper that reveals the importance of Big Data for Deep Learning: ImageNet Classification with Deep Convolutional Neural Networks.
International Journal of Science and Business, 2020
With the arrival of the big data age, deep learning has developed a more complex network structure and more powerful feature learning and feature expression abilities than traditional machine learning methods. Models trained by deep learning algorithms have made remarkable achievements in many large-scale recognition tasks in the field of computer vision since their introduction. This paper first introduces deep learning, and then reviews the latest deep learning models used for image classification. Finally, all deep learning models used in the literature are compared with each other in terms of accuracy on the two most challenging datasets, CIFAR-10 and CIFAR-100.
Journal of Al-Qadisiyah for Computer Science and Mathematics
Deep learning is a branch of machine learning that focuses on the development and refinement of complex neural networks for data analysis, prediction, and decision-making. Deep learning models use numerous layers of artificial neurons to automatically extract important features from raw data, making them superior to typical machine learning models at many tasks. Their success has advanced state-of-the-art performance and created new research and application prospects. Deep learning has become popular due to its capacity to tackle complicated issues in computer vision, natural language processing, speech recognition, and decision-making. In this study, we discuss deep learning techniques and applications, including recurrent neural networks, long short-term memory, convolutional neural networks, generative adversarial networks, and autoencoders. We also demonstrate deep learning's use in various fields. Deep learning has transformed artificial in...
IAEME PUBLICATION, 2020
Deep learning techniques have spread widely across scientific disciplines and information technology, including voice recognition, object detection, and learning processes in visual processing. Conventional data analysis methods, by contrast, face many constraints when processing massive amounts of information. Deep Learning is a rapidly emerging field within neural networks and artificial intelligence, and it has gained enormous success in important technologies such as machine learning, voice and video analysis, and machine translation. Huge quantities of information are produced daily by numerous sources, and translating this data into analytics poses difficulties in the phases of knowledge processing and decision-making. Furthermore, Big Data analytics needs new and advanced methodologies, based on machine and deep learning methods, to analyze data in real time with high reliability and productivity. Deep learning can promote the handling of such information, particularly through its capacity to handle both labeled and unlabeled data, which are often amply gathered in Big Data. The paper provides a comprehensive overview of Big Data and discusses particular problems in analytics that Deep Learning can solve.
With the recent advancement in digital technologies, data sets have become so large that traditional data processing and machine learning techniques cannot cope with them effectively. Analyzing complex, high-dimensional, and noise-contaminated data sets is a huge challenge, and it is crucial to develop novel algorithms able to summarize, classify, and extract important information from them and convert it into an understandable form.
Journal of Imaging, 2021
Features play a crucial role in computer vision. Initially designed to detect salient elements by means of handcrafted algorithms, features are now often learned using different layers in convolutional neural networks (CNNs). This paper develops a generic computer vision system based on features extracted from trained CNNs. Multiple learned features are combined into a single structure to work on different image classification tasks. The proposed system was derived by testing several approaches for extracting features from the inner layers of CNNs and using them as inputs to support vector machines that are then combined by sum rule. Several dimensionality reduction techniques were tested for reducing the high dimensionality of the inner layers so that they can work with SVMs. The empirically derived generic vision system, based on applying a discrete cosine transform (DCT) separately to each channel, is shown to significantly boost the performance of standard CNNs across a large and ...
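The DCT-based reduction and sum-rule fusion described above can be illustrated with a minimal NumPy sketch. This is a sketch under assumed shapes (1-D feature channels, per-classifier score vectors), not the paper's actual pipeline, and all names are mine:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    M = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    M[0, :] = np.sqrt(1.0 / n)  # DC row gets its own normalization
    return M

def reduce_channel(features, keep):
    """Transform one 1-D feature channel with the DCT and keep only the
    first `keep` (lowest-frequency) coefficients, shrinking the input
    to a dimensionality an SVM can handle."""
    M = dct_matrix(len(features))
    return (M @ features)[:keep]

def sum_rule(score_lists):
    """Fuse the decision scores of several classifiers by simple addition."""
    return np.sum(score_lists, axis=0)
```

Because the DCT concentrates most of the signal energy in the low-frequency coefficients, truncating the transform is a cheap, training-free dimensionality reduction compared with, say, PCA.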
In the information era, enormous amounts of data have become available to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle these datasets and extract value and knowledge from them. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Deep learning, the application of advanced neural-network techniques to big data, pushes this automation further. This paper aims to analyse some of the different machine learning and deep learning algorithms and methods, as well as the opportunities provided by AI applications in various decision-making domains.
2019
A major challenge with neural networks is that they are computationally and memory intensive. To address this difficulty we examine deep neural network compression. Among machine learning models, we explain and compare Deep Neural Networks (DNNs) and deep learning methods. This paper mainly covers Deep Compression as a three-stage pipeline: pruning, trained quantization, and Huffman coding. With this method, neural networks are compressed without affecting accuracy. The main aim is to minimize the energy and storage required to run inference on such large networks. Both compression and learning algorithms are discussed. We evaluate large-scale deep neural network applications using multiple GPU machines. Various datasets are compared in this survey.
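The first two stages of the pipeline (pruning and quantization) can be sketched in NumPy. Note this is a simplified stand-in: the quantizer here is uniform rather than the trained, k-means-style codebook of the original Deep Compression work, Huffman coding is omitted, and the function names are mine:

```python
import numpy as np

def prune(weights, fraction=0.5):
    """Magnitude pruning: zero out the `fraction` of weights with the
    smallest absolute values, producing a sparse weight tensor."""
    threshold = np.quantile(np.abs(weights).ravel(), fraction)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

def quantize(weights, n_levels=4):
    """Uniform quantization: snap each weight to the nearest of n_levels
    shared values, so only a small codebook plus indices must be stored."""
    lo, hi = weights.min(), weights.max()
    levels = np.linspace(lo, hi, n_levels)
    idx = np.abs(weights[..., None] - levels).argmin(axis=-1)
    return levels[idx]
```

After these two stages, the weight tensor contains many zeros and only a handful of distinct values, which is exactly the kind of skewed symbol distribution that the final Huffman-coding stage compresses well.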
2019
Deep learning is a recent technology that has been effectively applied to feature learning in image classification and language processing. However, recent deep learning models operate in vector spaces, which fail when learning features from non-linear distributions of heterogeneous data. Supervised learning of recurrent neural networks (RNNs) can require large amounts of labeled data, and obtaining such large volumes of training samples is time-consuming and costly. One way of addressing this problem is to extract features from unlabeled data. This paper proposes a deep computation model for feature learning on big data that learns the underlying data distribution using a Deep Recurrent Neural Network based weighted Softmax regression (DRNNWSR) with no need for labeled instances. The proposed approach is moderately simple, yet achieves accuracy comparable to that of more advanced techniques. The proposed strategy is significantly easier to train, contrasted wi...
The forecast of frost occurrence requires complex decision analysis that uses conditional probabilities. Frost events reduce the production of crops and flowers, and we must predict them to minimize the damage; if the frost predictions are accurate, farmers can take timely precautionary measures and the damage caused by frost can be reduced. In this paper, an ensemble learning approach with a Convolutional Neural Network (CNN) is used to detect frost events more efficiently and accurately. For measurement and analysis on Google Play, we scraped a dataset of the Agricultural category from different genres, collecting the top 550 applications of each Agricultural category with 70 attributes per category. Frost events are predicted a few days before the actual event with an accuracy of 98.86%.
Deep learning is currently an extremely active research area in the pattern recognition community. It has gained huge successes in a broad area of applications such as speech recognition, computer vision, and natural language processing. With the sheer size of data available today, big data brings big opportunities and transformative potential for various sectors; on the other hand, it also presents unprecedented challenges to harnessing data and information. As the data keeps getting bigger, deep learning is coming to play a key role in providing big data predictive analytics solutions. Big data assists ML algorithms in uncovering more fine-grained patterns and making accurate predictions. The major challenges for ML are model scalability and distributed computing. The realization of this grand potential relies on the ability to extract value from such massive data through data analytics; deep learning is at its core because of its ability to learn from data and provide data-driven insights, decisions, and predictions. In this paper, we first review deep learning techniques and highlight some promising learning methods from recent studies, such as representation learning, distributed and parallel learning, transfer learning, active learning, and kernel-based learning, and analyse the challenges and possible solutions of deep learning for big data. Finally, we outline several open issues and research trends.
Deep learning architectures belong to the broad family of machine learning algorithms based on the model of artificial neural networks. Rapid advancements in technology during the last decade have provided many new possibilities to collect and maintain large amounts of data. Deep learning is considered a prominent field for processing, analyzing and finding patterns in such large amounts of data, used in various domains including medical diagnosis, precision agriculture, education, market analysis, natural language processing, recommendation systems and several others. Without any human intervention, deep learning models are capable of producing appropriate results that are equivalent to, and sometimes even better than, human performance. This paper discusses the background of deep learning and its architectures, deep learning applications developed or proposed by various researchers in different domains, and various deep learning tools.
Archives of Computational Methods in Engineering, 2019
Nowadays, deep learning is a stimulating field of machine learning, and among the most effective, supervised, time- and cost-efficient machine learning approaches. It is not a restricted learning approach, but comprises various procedures and topologies that can be applied to an immense spectrum of complicated problems. The technique learns representative and discriminative features in a hierarchical way. Deep learning methods have made significant breakthroughs with appreciable performance in a wide variety of applications, including useful security tools. It is considered the best choice for discovering complex architecture in high-dimensional data by employing the backpropagation algorithm. As deep learning has made significant advancements and achieved tremendous performance in numerous applications, its widely used domains include business, science and government, covering adaptive testing, biological image classification, computer vision, cancer detection, natural language processing, object detection, face recognition, handwriting recognition, speech recognition, stock market analysis, smart cities and many more. This paper focuses on the concepts of deep learning, its basic and advanced architectures, techniques, motivational aspects, characteristics and limitations. The paper also presents the major differences between deep learning, classical machine learning and conventional learning approaches, and the major challenges ahead. The main intention of this paper is to explore and present, chronologically, a comprehensive survey of the major applications of deep learning covering a variety of areas, a study of the techniques and architectures used, and the contribution of each application in the real world. Finally, the paper ends with the conclusion and future aspects.
Principia: An International Journal of Epistemology, 2022
Although deep learning has historically deep roots in the vast area of artificial intelligence and, more specifically, in the study of machine learning and artificial neural networks, it is only recently that this line of investigation has borne fruit of great commercial value, thus beginning to have a significant impact on society. It is precisely because of the wide applicability of this technology nowadays that we must be alert, in order to be able to foresee the negative implications of its indiscriminate uses. Of fundamental importance, in this context, are the risks associated with collecting large amounts of data for training neural networks (and for other purposes too), the dilemma of the strong opacity of these systems, and issues related to the misuse of already trained neural networks, as exemplified by the recent proliferation of deepfakes. This text introduces and discusses these issues with a pedagogical bias, thus aiming to make the topic accessible to new researchers interested in this area of application of scientific models.
International Journal of Trend in Scientific Research and Development, 2018
Deep learning is an emerging research area in the machine learning and pattern recognition field. Deep learning refers to machine learning techniques that use supervised or unsupervised strategies to automatically learn hierarchical representations in deep architectures for classification. The objective is to discover more abstract features in the higher levels of the representation by using neural networks that easily separate the various explanatory factors in the data. In recent years it has attracted much attention due to its state-of-the-art performance in diverse areas such as object perception, speech recognition, computer vision, collaborative filtering and natural language processing. As the data keeps getting bigger, deep learning is coming to play a key role in providing big data predictive analytics solutions. This paper presents a brief overview of deep learning, its techniques, current research efforts and the challenges involved.