IEEE Intelligent Systems, 2016
This special issue presents advancements in pattern recognition, focusing on visual data analysis, including image, video, and multimedia. It features various studies exploring new methodologies for tasks like visual categorization, facial expression recognition, and face sketch-photo matching, highlighting the significance of cross-domain learning, labeled data use, and descriptor development for improved recognition performance.
Journal of Intelligent Computing
Many approaches to both feature extraction and classification have been proposed in order to build a robust automatic facial expression recognition system. The chosen features and classifiers are usually compared only in limited scenarios. In this paper, 5 feature extractors (3 variations of Gabor filters, Local Binary Patterns, and Discrete Cosine Transform) and 4 classifiers (K-Nearest Neighbors, Support Vector Machine, Radial Basis Function Neural Network, and Naive Bayes) were combined and applied to three different datasets: JAFFE, Yale, and CK+. The combinations of feature extractors and classifiers were compared in more robust settings, being evaluated on each dataset separately and in 3 cross-database settings, to verify each technique's generalization power. All experiments were performed in two validation scenarios: in the first, the system tries to recognize the facial expression of a person already known to it from the training phase, using an image not present in the training set (leave-one-out); in the other, all images of a certain individual are placed in the testing set, so the system tries to recognize the facial expression of a person unknown to it in the training phase (leave-one-subject-out). Accuracy results, as well as computational times, are presented, indicating which combinations of feature extractors and classifiers are best suited for generalization.
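Although the paper's own implementation is not given here, the evaluation grid it describes can be approximated with off-the-shelf tools. The following sketch is an assumed reconstruction, not the authors' code: it pairs two of the feature extractors (LBP and DCT) with three of the classifiers and uses scikit-learn's LeaveOneGroupOut to realize the leave-one-subject-out scenario; the Gabor variants and the RBF network are omitted for brevity.

```python
# Sketch of the extractor x classifier grid evaluation, assuming images are
# preloaded as a (n_samples, h, w) float array `faces` with expression labels
# `expr` and per-person ids `subject`.
import numpy as np
from scipy.fftpack import dct
from skimage.feature import local_binary_pattern
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

def lbp_features(img, P=8, R=1):
    """Histogram of uniform local binary patterns (P+2 possible codes)."""
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def dct_features(img, k=8):
    """Top-left k x k block of the 2-D DCT (low-frequency coefficients)."""
    coeffs = dct(dct(img, axis=0, norm="ortho"), axis=1, norm="ortho")
    return coeffs[:k, :k].ravel()

extractors = {"LBP": lbp_features, "DCT": dct_features}
classifiers = {"KNN": KNeighborsClassifier(3),
               "SVM": SVC(kernel="linear"),
               "NB": GaussianNB()}

def evaluate(faces, expr, subject):
    # Leave-one-subject-out: all images of one person form the test fold,
    # so the classifier never sees the test subject during training.
    logo = LeaveOneGroupOut()
    for fname, fx in extractors.items():
        X = np.array([fx(f) for f in faces])
        for cname, clf in classifiers.items():
            acc = cross_val_score(clf, X, expr, groups=subject, cv=logo).mean()
            print(f"{fname} + {cname}: {acc:.3f}")
```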
International Journal of Intelligent Systems and Applications in Engineering, 2024
Facial expression recognition is a crucial area of study in computer vision. Research on nonverbal communication has shown that a significant amount of deliberate information is conveyed through facial expressions, and expression recognition has lately been applied extensively in the medical and advertising sectors. Facial emotion recognition examines facial expressions in static photos and videos to uncover information about an individual's emotional state. The intricacy of facial expressions, the technology's applicability in almost any setting, and the incorporation of emerging technologies such as artificial intelligence pose substantial privacy hazards. Facial expressions serve as non-verbal cues, offering indications of human emotions, and deciphering them has been a focal point of psychological study for many years. This study examines several prior works that have undertaken comprehensive facial analysis, covering both whole-face and partial-face recognition, to identify expressions and emotions. The datasets, models, and findings of previous studies show that using the whole face yields higher accuracy than using specific face parts, which produce lower accuracy ratios. However, emotion identification often cannot rely on the whole face alone, since the full face is not always available; contemporary research therefore prioritizes recognizing facial expressions from particular facial features. Efficient deep learning algorithms, particularly CNNs, can perform this task.
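As a concrete illustration of that last point, a compact CNN over a single facial region might look like the sketch below; the input size, layer widths, and seven-class output are assumptions chosen for illustration, not taken from any study surveyed here.

```python
# Minimal PyTorch sketch of a per-region expression CNN, assuming 48x48
# grayscale crops of one facial region (e.g. the mouth area) and seven
# expression classes; all sizes are illustrative.
import torch
import torch.nn as nn

class RegionExpressionCNN(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 48 -> 24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 24 -> 12
        )
        self.classifier = nn.Linear(64 * 12 * 12, n_classes)

    def forward(self, x):                         # x: (batch, 1, 48, 48)
        return self.classifier(self.features(x).flatten(1))
```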
In previous works of ours (1-3), we proposed a neural network-based face detection and facial expression analysis system, which was able to classify three expressions in frontal-view face images. In the present work, we examine the possibility of classifying these expressions in side-view face images. Specifically, we evaluate the discrimination power of facial features extracted under three image acquisition techniques, namely acquisition in (1) frontal view, (2) side view, and (3) stereoscopic pairs. Our findings are important in the design of human-computer interaction systems and multimedia interactive services.
International Journal of Computational Intelligence and Applications, 2000
We compare the performance and generalization capabilities of different low-dimensional representations for facial emotion classification from static face images showing happy, angry, sad, and neutral expressions. Three general strategies are compared: the first approach uses the average face for each class as a generic template and classifies the individual facial expressions according to the best match of each template. The second strategy uses a multi-layered perceptron trained with the backpropagation-of-error algorithm on a subset of all facial expressions and subsequently tested on unseen face images. The third approach introduces a preprocessing step prior to the learning of an internal representation by the perceptron. The feature extraction stage computes the oriented response to six odd-symmetric and six even-symmetric Gabor filters at each pixel position in the image. The template-based approach reached up to 75% correct classification, which corresponds to the correct recognition of three out of four expressions. However, the generalization performance only reached about 50%. The multi-layered perceptron trained on the raw face images almost always reached a classification performance of 100% on the test set, but the generalization performance on new images varied from 40% to 80% correct recognition, depending on the choice of the test images. The introduction of the preprocessing stage was not able to improve the generalization performance but slowed down the learning by a factor of ten. We conclude that a template-based approach for emotion classification from static images has only very limited recognition and generalization capabilities. This poor performance can be attributed to the smoothing of facial detail caused by small misalignments of the faces and the large inter-personal differences in facial expressions exposed in the data set. Although the nonlinear extraction of appropriate key features from facial expressions by the multi-layered perceptron is able to maximize classification performance, the generalization performance usually reaches only 60%.
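The first (template-based) strategy is simple enough to state in a few lines. The sketch below is an assumed reconstruction rather than the paper's code: each class template is the pixel-wise average face, and a new image is assigned to the class whose template is closest in Euclidean distance.

```python
# Assumed sketch of template-based emotion classification: one average-face
# template per class, nearest template wins.
import numpy as np

def fit_templates(X, y):
    """X: (n, h*w) flattened, aligned face images; y: class labels."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def classify(templates, img):
    """Best match = smallest Euclidean distance to a class template."""
    return min(templates, key=lambda c: np.linalg.norm(img - templates[c]))
```

Note how this makes the reported failure mode plausible: averaging misaligned faces smooths away exactly the facial detail the distance measure would need.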
Proceedings of the 12th International Conference on Agents and Artificial Intelligence, 2020
Facial expression recognition (FER) in the context of machine learning refers to a solution whereby a computer vision system can be trained and used to automatically detect the emotion of a person from a presented facial image. FER presents a difficult image classification problem that has received increasing attention over recent years, mainly due to the availability of powerful hardware for system implementation and the greater number of possible applications in everyday life. However, the FER problem has not yet been fully resolved, with the diversity of captured facial images from which the type of expression or emotion is to be detected being one of the main obstacles. Ready-made image databases have been compiled by researchers to train and test the developed FER algorithms. Most of the reported algorithms perform relatively well when trained and tested on a single database but offer significantly inferior results when trained on one database and then tested using facial images from an entirely different database. This paper deals with the cross-database FER problem by proposing a novel approach which aggregates local region features from the eyes, nose and mouth and selects the optimal classification techniques for this specific aggregation. The conducted experiments show a substantial improvement in the recognition results when compared to similar cross-database tests reported in other works. This paper confirms the idea that, for images originating from different databases, focus should be given to specific regions while less attention is paid to the face in general and other facial sections.
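The aggregation idea can be sketched as follows. The fixed crop boxes are hypothetical stand-ins for a real landmark-based region localizer, and the local descriptor is left pluggable, so this illustrates the general approach rather than the paper's implementation.

```python
# Assumed sketch: concatenate per-region features from eyes, nose, and mouth.
import numpy as np

# (row0, row1, col0, col1) boxes on a normalized 96x96 face crop; the exact
# boxes are placeholders, in practice they would come from landmark detection.
REGIONS = {"eyes": (20, 45, 10, 86),
           "nose": (40, 70, 30, 66),
           "mouth": (65, 92, 22, 74)}

def aggregate_region_features(face, extract):
    """`extract` is any local descriptor (e.g. an LBP histogram) applied to
    each cropped region; the per-region vectors are concatenated."""
    parts = [extract(face[r0:r1, c0:c1])
             for (r0, r1, c0, c1) in REGIONS.values()]
    return np.concatenate(parts)
```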
Multimedia Tools and Applications, 2013
Most facial expression recognition (FER) systems use facial expression data created during a short period of time for learning/training. There are many facial expression patterns (i.e., a particular expression can be represented in many different patterns) that cannot be generated and used as training data in a short time. Therefore, for a facial expression recognition system to maintain high accuracy and robustness over a long time, the classifier should evolve adaptively over time and space. We propose a facial expression recognition system that has the aptitude of incremental learning and thus can learn all possible patterns of expressions that may be generated in the future. After extraction of the region of interest (the face), the system extracts Speeded-Up Robust Features (SURF).
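A rough sketch of such a pipeline is shown below, assuming OpenCV built with the non-free xfeatures2d module (SURF is patented and absent from default builds). Mean-pooling the SURF descriptors and updating an SGD classifier with partial_fit are illustrative stand-ins for the paper's incremental learner, not its actual method.

```python
# Assumed sketch: SURF features from a face crop feeding an incrementally
# updated linear classifier. Requires opencv-contrib built with NONFREE.
import cv2
import numpy as np
from sklearn.linear_model import SGDClassifier

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
clf = SGDClassifier(loss="log_loss")

def surf_vector(gray_face):
    """Mean-pool the 64-D SURF descriptors into one fixed-length vector."""
    _, desc = surf.detectAndCompute(gray_face, None)
    return desc.mean(axis=0) if desc is not None else np.zeros(64)

def update(face, label, classes):
    # partial_fit lets the classifier keep evolving as new expression
    # patterns arrive over time, instead of retraining from scratch.
    clf.partial_fit(surf_vector(face).reshape(1, -1), [label], classes=classes)
```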
Affective Computing, 2008
2014
A crucial part of facial expression analysis is capturing face deformation. In this work, we are interested in the use of 3D facial surface normals (3DFSN) to classify six basic facial expressions, and the proposed approach was evaluated on the Bosphorus database. We constructed a Principal Component Analysis (PCA) model to capture variations in facial shape due to changes in expression, using 3DFSN as the feature vector. A modular approach is employed in which a face is decomposed into six different regions and the expression classification for each module is carried out independently. We constructed a Weighted Voting Scheme (WVS) to infer the emotion underlying a collection of modules, using weights determined by the AdaBoost learning algorithm. Our results indicate that using 3DFSN as the feature vector of the WVS yields better performance than 3D facial points and 3D facial distance measurements in facial expression classification using both WVS and a Majority Voting ...
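The weighted voting step itself is straightforward. In the sketch below the per-region weights are placeholder values (in the paper they are learned with AdaBoost), and the six region names are hypothetical.

```python
# Assumed sketch of the weighted voting scheme: each face-region module
# casts a vote for an expression, weighted by that module's reliability.
from collections import defaultdict

def weighted_vote(module_predictions, module_weights):
    """module_predictions: {region: predicted_expression};
    module_weights: {region: weight}.
    Returns the expression with the largest weighted vote mass."""
    scores = defaultdict(float)
    for region, label in module_predictions.items():
        scores[label] += module_weights[region]
    return max(scores, key=scores.get)

# Example: mouth and eyes agree and carry high weight, so "happy" wins.
preds = {"mouth": "happy", "left_eye": "happy", "right_eye": "surprise",
         "nose": "neutral", "forehead": "happy", "chin": "surprise"}
weights = {"mouth": 0.30, "left_eye": 0.20, "right_eye": 0.15,
           "nose": 0.10, "forehead": 0.15, "chin": 0.10}
print(weighted_vote(preds, weights))  # -> happy
```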
International Journal of Advance Research, Ideas and Innovations in Technology, 2021
IEEE Transactions on Multimedia, 2013
EURASIP Journal on Advances in Signal Processing, 2012
Signal Processing-image Communication, 2002
International Journal for Research in Applied Science and Engineering Technology (IJRASET), 2020
Image and Vision Computing, 2012
Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 2018
Applied Computational Intelligence and Soft Computing