Papers by Mohammad Faridul Haque Siddiqui

2017 International Conference on Computational Science and Computational Intelligence (CSCI), 2017
Multimodal interaction is a type of Human-Computer Interaction that involves a combination of multifarious modalities to effectuate a task. Human-to-human interactions necessitate the use of all available modalities, such as speech, gestures, facial expressions, and assorted media, for intelligent communication. Single modalities often introduce ambivalent interpretations of the ideas being conveyed; multimodal interaction plays a major role in resolving such ambiguities. The various modalities used can be combined as a way to recover the semantics of the messages involved. This strategy, known as data fusion, is the main focus of the research presented. This paper proposes a new genetic-algorithm-based approach for achieving intelligent data fusion for inferred context-free grammars. Grammar inference methods are applied to generate the grammars, and the resulting production rules are fused to generate a correct grammar. Related results, their implications, and future work are presented.
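The abstract does not spell out the GA-driven rule fusion, so the following is only a toy sketch of the general idea: candidate fused grammars are encoded as bit-strings selecting subsets of inferred production rules, and a genetic algorithm evolves them. The rule set and the fitness function here are hypothetical placeholders; a real fitness would score how well the fused grammar parses a sample corpus.

```python
# Toy sketch (not the paper's algorithm): a genetic algorithm over bit-strings
# that select a subset of inferred production rules for a fused grammar.
import random

RULES = ["S->aSb", "S->ab", "S->aS", "S->b"]   # hypothetical inferred rules
POP, GENS, MUT = 20, 30, 0.1
random.seed(0)

def fitness(chrom):
    # Placeholder: reward including the first two (assumed-correct) rules
    # and penalize grammar size; a real fitness would parse sample strings.
    return 2 * (chrom[0] + chrom[1]) - 0.5 * sum(chrom)

def crossover(a, b):
    # Single-point crossover between two parent chromosomes
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(chrom):
    # Flip each inclusion bit with probability MUT
    return [g ^ 1 if random.random() < MUT else g for g in chrom]

pop = [[random.randint(0, 1) for _ in RULES] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                      # truncation selection
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(POP - len(parents))]

best = max(pop, key=fitness)
print("fused rule set:", [r for r, g in zip(RULES, best) if g])
```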

Sensors, 2019
Over the past two decades, automatic facial emotion recognition has received enormous attention. This is due to the increase in the need for behavioral biometric systems and human–machine interaction, where facial emotion recognition and the intensity of emotion play vital roles. Existing works usually do not encode the intensity of the observed facial emotion, and even fewer model the multi-class facial behavior data jointly. Our work involves recognizing the emotion along with the respective intensities of those emotions. The algorithms used in this comparative study are Gabor filters, the Histogram of Oriented Gradients (HOG), and Local Binary Patterns (LBP) for feature extraction. For classification, we have used Support Vector Machines (SVM), Random Forests (RF), and the k-Nearest Neighbors algorithm (kNN). This attains emotion recognition and intensity estimation for each recognized emotion. This is a comparative study of classifiers used for facial emotion recognition along with the intensities of the recognized emotions.
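As a rough illustration of the comparative setup (not the authors' code), the sketch below extracts one of the three feature types named in the abstract (HOG, via scikit-image) and compares the three classifiers on placeholder data. The image size, class count, and all hyperparameters are assumptions.

```python
# Minimal sketch: HOG features + SVM/RF/kNN, mirroring the comparison
# described in the abstract. Data here is random placeholder imagery.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
images = rng.random((200, 48, 48))         # placeholder 48x48 face crops
labels = rng.integers(0, 7, size=200)      # 7 emotion classes (assumption)

# HOG descriptor per image (one of the three feature extractors compared)
X = np.array([hog(img, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
              for img in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25,
                                          random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("RF", RandomForestClassifier(n_estimators=100)),
                  ("kNN", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```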

Multimodal Technologies and Interaction
Multimodal human–computer interaction (HCI) systems promise a more human–human-like interaction between machines and humans. Their prowess in enabling an unambiguous information exchange between the two makes these systems more reliable, efficient, less error-prone, and capable of solving complex tasks. Emotion recognition is a realm of HCI that follows multimodality to achieve accurate and natural results. The prodigious use of affective identification in e-learning, marketing, security, health sciences, etc., has increased demand for high-precision emotion recognition systems. Machine learning (ML) is increasingly applied to ameliorate the process by tweaking the architectures or wielding high-quality databases (DB). This paper presents a survey of such DBs that are being used to develop multimodal emotion recognition (MER) systems. The survey illustrates the DBs that contain multi-channel data, such as facial expressions, speech, physiological signals, body movements, and gestures.

Multimodal Technologies and Interaction
The exigency of emotion recognition is pushing the envelope for meticulous strategies of discerning actual emotions through the use of superior multimodal techniques. This work presents a multimodal automatic emotion recognition (AER) framework capable of differentiating between expressed emotions with high accuracy. The contribution involves implementing an ensemble-based approach for AER through the fusion of visible images and infrared (IR) images with speech. The framework is implemented in two layers: the first layer detects emotions using single modalities, while the second layer combines the modalities and classifies emotions. Convolutional Neural Networks (CNN) have been used for feature extraction and classification. A hybrid fusion approach, comprising early (feature-level) and late (decision-level) fusion, was applied to combine the features and the decisions at different stages. The output of the CNN trained with voice samples of the RAVDESS database was combined…
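The abstract names a hybrid of feature-level and decision-level fusion; the sketch below illustrates both mechanisms in the abstract's three-modality setting, with the per-modality CNNs replaced by random placeholder outputs. The class count, embedding sizes, and uniform averaging weights are assumptions, not details from the paper.

```python
# Minimal sketch (not the paper's implementation) of hybrid fusion:
# late fusion averages per-modality class probabilities; early fusion
# concatenates per-modality feature vectors for a downstream classifier.
import numpy as np

n, classes = 8, 7                        # 8 samples, 7 classes (assumption)
rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# --- late (decision-level) fusion: average per-modality probabilities ---
p_visible = softmax(rng.random((n, classes)))  # stand-in: visible-image CNN
p_ir      = softmax(rng.random((n, classes)))  # stand-in: IR-image CNN
p_speech  = softmax(rng.random((n, classes)))  # stand-in: speech CNN
p_late = (p_visible + p_ir + p_speech) / 3.0
print("late-fusion predictions:", p_late.argmax(axis=1))

# --- early (feature-level) fusion: concatenate feature vectors ---
f_visible = rng.random((n, 128))               # stand-in CNN embeddings
f_ir      = rng.random((n, 128))
f_speech  = rng.random((n, 64))
f_early = np.concatenate([f_visible, f_ir, f_speech], axis=1)
print("fused feature dimension:", f_early.shape[1])  # fed to a classifier
```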

Sensors, 2018
Extensive possibilities of applications have made emotion recognition ineluctable and challenging in the field of computer science. The use of non-verbal cues such as gestures, body movement, and facial expressions conveys the feeling and the feedback to the user. This discipline of Human-Computer Interaction places reliance on algorithmic robustness and the sensitivity of the sensor to ameliorate recognition. Sensors play a significant role in accurate detection by providing very high-quality input, hence increasing the efficiency and reliability of the system. Automatic recognition of human emotions would help in teaching social intelligence to machines. This paper presents a brief study of the various approaches and techniques of emotion recognition. The survey covers a succinct review of the databases that are considered as data sets for algorithms detecting emotions from facial expressions. Later, the mixed-reality device Microsoft HoloLens (MHL) is introduced…