2020, International Journal for Research in Applied Science and Engineering Technology IJRASET
https://doi.org/10.22214/ijraset.2020.5023…
The human face is an important organ of the human body and plays a major role in conveying an individual's emotional state. Manually segregating a song list and producing an appropriate playlist based on an individual's emotional characteristics is a very tiring, time-consuming, and labour-intensive activity. Various algorithms have been proposed and developed to automate the playlist-generation process. However, the algorithms currently available for implementation are computationally slow, less accurate, and sometimes require the use of additional equipment such as EEG devices or sensors. The proposed model, based on facial expression extraction, generates playlists automatically, thereby reducing the effort and time involved in performing the process by hand. The proposed system thus tends to reduce the time involved in obtaining results and the total cost of the system, while increasing the accuracy of the entire program.
International Journal of Linguistics and Computational Applications, 2018
Nowadays, almost everything is computerized, which defines the term "digital world"; we can hardly get by without computers these days. Blue Eyes technology was introduced to make interaction with computers more natural. It is a system programmed with perceptual and sensory abilities, i.e., it gives human-like abilities to machines. This helps the computer sense the user's mood and current state (in terms of feelings and needs) from their facial expressions and their touch on the mouse. The technology uses image processing, face recognition, eye tracking, and speech recognition techniques. This paper explains a new technique, known as the emotion sensory technique, which helps the system identify the user's mood (e.g., happy, sad, angry, surprised) together with cloud storage and authentication. This technology helps develop more user-friendly and effective communication between human beings and computers.
International Journal for Research in Applied Science & Engineering Technology (IJRASET), 2022
In the day-to-day life of a human, and in the technologies that are emerging, music is incredibly significant. Conventionally, it is the user's job to search for a song manually from the list of songs in a music player. Here, an efficient and accurate model is introduced to generate a playlist on the basis of the user's present emotional state and behaviour. Existing methods for automated playlist building are computationally inefficient, inaccurate, and can involve the use of additional gear such as EEG sensors. Speech is the most primitive and natural form of communicating feelings, emotions, and mood, but its processing is computationally intensive, time-consuming, and expensive. The suggested system therefore uses real-time face emotion extraction as well as audio feature extraction.
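As an illustration of the audio feature extraction step mentioned above, the following Python sketch summarizes a track with MFCC statistics. The librosa library, the sample rate, and the choice of 13 coefficients are assumptions for illustration; the paper does not specify its audio features.

import librosa
import numpy as np

def extract_audio_features(path):
    # Decode the track to mono at 22.05 kHz (librosa's default rate).
    y, sr = librosa.load(path, sr=22050)
    # 13 Mel-frequency cepstral coefficients per frame.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    # Mean and standard deviation over time give a fixed-length
    # 26-dimensional vector per track, suitable for a simple classifier.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])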
International journal of Emerging Trends in Science and Technology, 2016
The most recognizable part of the human body is the face itself, and it plays an extremely significant role in extracting an individual's current emotional state. Music can certainly be a mood changer, but manually segregating the list of songs and creating playlists according to the individual's emotional state is exhausting, time-consuming, boring, and labour-intensive work. To automate playlist generation, and to save users this tedium, a number of algorithms have been proposed and developed. However, these algorithms have their own drawbacks: they are computationally very slow, less accurate, and some even require additional hardware like EEG sensors, which adds to the overall cost. Through this paper we propose a system based on facial expression extraction that automates the process of playlist generation, thereby reducing the effort and time required to segregate the list of songs. Moreover, the proposed system tends to reduce the computational time in obtaining results, improving overall accuracy while also reducing the design cost.
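For a concrete picture of the playlist-generation step these papers describe, the following Python sketch maps a detected emotion label to a shuffled playlist. The emotion labels, file names, and mapping are purely illustrative placeholders, not taken from any of the papers.

import random

# Hypothetical emotion-to-song mapping; labels and files are placeholders.
EMOTION_PLAYLISTS = {
    "happy":   ["upbeat_pop.mp3", "dance_mix.mp3"],
    "sad":     ["slow_ballad.mp3", "acoustic_set.mp3"],
    "angry":   ["hard_rock.mp3", "metal_mix.mp3"],
    "neutral": ["ambient_mix.mp3", "lofi_set.mp3"],
}

def build_playlist(emotion, length=10):
    # Fall back to the neutral pool for unrecognized labels.
    pool = EMOTION_PLAYLISTS.get(emotion, EMOTION_PLAYLISTS["neutral"])
    return random.choices(pool, k=length)  # sample with replacement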
2021
The human face is an important part of an individual's body, and extracting the required input from it can now be done directly using a camera. One application of this input is extracting information to deduce the mood of an individual. This data can then be used to produce a list of songs that matches the mood derived from the input. This eliminates the time-consuming and tedious task of manually segregating or grouping songs into different lists and helps generate an appropriate playlist based on an individual's emotional features. Various algorithms have been developed and proposed for automating the playlist generation process. The expressions of a person are detected by extracting the facial features using a Haar cascade classifier and the Fisherface algorithm. The results show that the proposed system achieves up to 88.82% accuracy in recognizing expressions.
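The Haar cascade and Fisherface steps named above can be sketched in Python with OpenCV as follows. This is a minimal illustration, assuming the opencv-contrib-python package (the Fisherface recognizer lives in the cv2.face module); the crop size and detector parameters are illustrative.

import cv2

# Frontal-face Haar cascade shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Fisherface recognizer from the contrib module; it must be trained on
# equal-sized grayscale face crops before predict() can be called.
recognizer = cv2.face.FisherFaceRecognizer_create()
# recognizer.train(training_crops, labels)  # placeholder training step

def classify_expressions(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.3,
                                                 minNeighbors=5):
        # Fisherface requires all crops to share one size, hence the resize.
        crop = cv2.resize(gray[y:y + h, x:x + w], (100, 100))
        label, confidence = recognizer.predict(crop)
        results.append((label, confidence))
    return results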
International Journal for Research in Applied Science & Engineering Technology (IJRASET), 2023
This research constructs a facial emotion framework that can examine fundamental human facial expressions. The suggested approach classifies the user's mood and then plays the audio file that corresponds to that emotion. First, the system captures the human face as input and carries out face detection. The face is then analysed using attribute extraction techniques, so that the person's emotion can be identified from the image. The signature points are located by extracting the regions of the eyes, mouth, and eyebrows. If the input face precisely matches a face in the emotion dataset, the individual's feelings are detected and the corresponding audio file is played. Training with a small range of characteristic faces can still achieve recognition in varying environmental conditions. An easy, effective, and reliable solution is proposed; such a system plays a very important part in the field of identification and detection.
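As one possible way to locate the signature points (eyes, mouth, eyebrows) described above, the following sketch uses dlib's 68-point landmark predictor. The paper does not name its extraction technique, so dlib and its standard pretrained model file are assumptions for illustration only.

import dlib

detector = dlib.get_frontal_face_detector()
# dlib's standard pretrained 68-landmark model, assumed to be on disk.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def signature_points(gray_image):
    # Group landmark coordinates by facial region using the standard
    # 68-point index ranges.
    regions = {}
    for face in detector(gray_image):
        shape = predictor(gray_image, face)
        coords = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        regions["eyebrows"] = coords[17:27]
        regions["eyes"] = coords[36:48]
        regions["mouth"] = coords[48:68]
    return regions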
Mantech Publications, 2023
Music is the mirror of one's emotions. In simple words, music relates to a person's emotional state or mood. New technologies and methods arise every day, and in keeping with this trend we have proposed a music player driven by the user's emotion as read from their facial expression. Emotion can easily be recognized from a person's face by others; what if we could implement this method (i.e., emotion detection) on a machine? We have developed an application that detects emotion through the machine's webcam and displays the set of songs, commonly called a playlist, based on that emotion. In this paper, we use a CNN classifier as the neural network model and OpenCV to detect faces, and we train the model to detect emotion.
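A minimal sketch of a CNN emotion classifier of the kind described above, built with Keras. The paper does not give its architecture, so the 48x48 grayscale input, seven emotion classes, and layer sizes are illustrative assumptions.

from tensorflow.keras import layers, models

def build_emotion_cnn(num_classes=7):
    model = models.Sequential([
        layers.Input(shape=(48, 48, 1)),          # grayscale face crop
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),                      # curb overfitting
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

In practice such a model is trained on a labelled facial-expression dataset such as FER-2013, which uses exactly this 48x48 grayscale format.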
Music plays a key role in reducing stress, building self-esteem, improving health, and more. It can be divided into a number of different genres, and people tend to select a specific genre on the basis of their mood and interests. Hence there is a real need for a platform that automatically suggests music on the basis of an individual's emotions. Facial expressions act as a form of nonverbal communication that conveys information about an individual's mood. Hence my work, Emophony, focuses on creating an application that suggests songs to the end user based on their emotions and interests by capturing their facial expressions. I have designed an artificially intelligent system capable of recognizing emotions through facial expressions. Once the mood is recognized, the system suggests a playlist of a particular genre for the respective emotion, ultimately saving the considerable time otherwise spent searching for, selecting, and playing songs manually.
International Journal of Engineering Research and Technology (IJERT), 2018
https://www.ijert.org/review-on-facial-expression-based-music-player
https://www.ijert.org/research/review-on-facial-expression-based-music-player-IJERTCONV6IS15116.pdf
Humans often use nonverbal cues such as hand gestures, facial expressions, and tone of voice to express feelings in interpersonal communication. The face is an important organ of an individual's body, and it plays an important role in extracting an individual's behaviour and emotional state; facial expression reflects a person's current state of mind. It is very time-consuming and difficult to create and manage large playlists and to select songs from them, so it would be very helpful if the music player itself selected a song according to the user's current mood. Manually segregating the list of songs and generating an acceptable playlist based on an individual's emotions is a terribly tedious, time-consuming, and labour-intensive task; an application can therefore be developed to minimize the effort of managing playlists. However, the existing algorithms in use are computationally slow and less accurate. The proposed system, based on extracted facial expressions, generates a playlist automatically, thereby reducing the effort and time involved in performing the process manually. Facial expressions are captured using an inbuilt camera, and the captured image is passed through different stages to detect the mood or emotion of the user. We study how to automatically detect the mood of the user and present them with a playlist of songs suitable for that mood. The paper uses the Viola-Jones algorithm for face detection and a multiclass SVM (Support Vector Machine) for emotion detection.
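The Viola-Jones plus multiclass SVM pipeline the review names can be sketched as follows: OpenCV's Haar cascade implements Viola-Jones detection, and scikit-learn's SVC (multiclass via one-vs-one by default) stands in for the emotion classifier. Training data, crop size, and preprocessing here are illustrative assumptions.

import cv2
import numpy as np
from sklearn.svm import SVC

# OpenCV's Haar cascade is an implementation of Viola-Jones detection.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# SVC handles the multiclass emotion labels via one-vs-one internally;
# it must be fitted on labelled face vectors before use.
svm = SVC(kernel="rbf")
# svm.fit(train_vectors, train_labels)  # placeholder training step

def predict_emotion(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        vec = crop.flatten().reshape(1, -1).astype(np.float32) / 255.0
        return svm.predict(vec)[0]  # label of the first detected face
    return None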
2017
Song players with automatic song-shuffling capability for mobile, personal computer, and handheld devices are widely available, and most of them accept user feedback to identify the user's mood and play songs accordingly. A key improvement area in this approach is the requirement for manual user input to determine the user's current emotional state: the onus is on the user to mark their present emotional state, which does not cater for any dynamism in the user's emotions. This paper introduces an approach that adds an automated human emotion recognition mechanism to an actively updated music provider, giving the user an automated and seamless song shuffler. The Facial Action Coding System devised by Carl-Herman Hjortsjö is the basis of the human emotion recognition aspect of this system. Music content will be reviewed both by the user and on the basis of the user's emotional change as feedback to the music. "Face is...