2021, International Journal of Advance Research and Innovative Ideas in Education
An innovative approach that generates a playlist for the user according to his or her mood. In today's modern world, music has become an integral and crucial part of human life and advanced technology. The hard part of listening to a song that matches one's mood is finding the apt song, which can be overcome by using advanced CNN techniques that precisely detect the user's emotions. A Facial Expression Recognition system must handle problems such as the detection and location of faces in a cluttered image, facial feature extraction, and expression classification. After training, the model precisely classifies emotions into the categories angry, happy, sad, and neutral.
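The playlist-generation step described above can be sketched as a simple mapping from the four detected classes to candidate songs. This is a minimal illustration, not the paper's implementation; the song file names and the neutral fallback are assumptions.

```python
import random

# Hypothetical emotion-to-playlist mapping; the song file names are
# placeholders, not taken from the paper.
PLAYLISTS = {
    "angry":   ["calming_1.mp3", "calming_2.mp3"],
    "happy":   ["upbeat_1.mp3", "upbeat_2.mp3"],
    "sad":     ["mellow_1.mp3", "mellow_2.mp3"],
    "neutral": ["ambient_1.mp3", "ambient_2.mp3"],
}

def generate_playlist(emotion: str, shuffle: bool = False) -> list[str]:
    """Return a playlist for a detected emotion, falling back to neutral."""
    songs = list(PLAYLISTS.get(emotion.lower(), PLAYLISTS["neutral"]))
    if shuffle:
        random.shuffle(songs)
    return songs
```

Falling back to a neutral playlist keeps the player usable even when the classifier emits a label outside the four trained categories.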
International Journal of Computer Science and Mobile Computing, 2021
Visual sentiment analysis, which investigates humans' emotional responses to visual stimuli such as images and videos, has been a fascinating and challenging problem. It attempts to recognize the high-level content of visual data. The success of current models can be attributed to the development of robust computer vision algorithms. The majority of existing models attempt to solve the problem by recommending either robust features or more complex models. The main proposed inputs are visual features from the entire image or video. Local areas have received little attention, even though we believe they are important to humans' emotional response to the entire image. Image recognition is used to find people in images and analyze their sentiments or emotions. The CNN algorithm is used in this project to accomplish this task. Given an image, it will search for faces, identify them, place a rectangle at their positions, and describe the emotion found with a percentage for each emotion displayed. The emotion output will be in audio format. After an emotion is detected, songs stored in the system are automatically played based on the prediction, such as a happy song when the emotion is happy and a sad song when the emotion is sad.
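The "percentage for each emotion" step can be sketched as a softmax over the classifier's raw output scores. The label set and score values below are assumptions for illustration, not from the paper.

```python
import math

def emotion_percentages(scores: dict[str, float]) -> dict[str, float]:
    """Turn raw classifier scores into percentage confidences via softmax."""
    # Subtract the max score before exponentiating for numerical stability.
    m = max(scores.values())
    exps = {label: math.exp(s - m) for label, s in scores.items()}
    total = sum(exps.values())
    return {label: round(100 * e / total, 1) for label, e in exps.items()}

def dominant_emotion(scores: dict[str, float]) -> str:
    """Label with the highest raw score (and thus the highest percentage)."""
    return max(scores, key=scores.get)
```

The dominant label would drive song selection, while the percentages provide the on-screen confidence readout the abstract describes.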
IRJET, 2023
This research paper presents the development of a web application with a facial recognition system that uses computer vision algorithms and machine learning approaches to effectively determine a user's emotions in real time. The system interprets facial features such as the eyes, mouth, and forehead to detect emotions, including happiness, sadness, neutral, and rock. Based on the detected emotion and the selected language and singer, the system recommends songs that best fit the user's mood and preferences. To train the deep learning model, the FER-2013 dataset of labeled facial images is used. The system is implemented on real-time video input, providing personalized recommendations to the user based on their mood. The proposed system has the potential to revolutionize the way we listen to music and enhance our wellbeing. By providing personalized recommendations based on the user's emotions and preferences, the system could improve the user's music listening experience and mood. To evaluate the system's performance, we conducted experiments using a dataset of labeled facial images. The results showed that the system accurately detects emotions, with an average accuracy of 81%. We also conducted a user study to evaluate the system's effectiveness in providing personalized recommendations. The results showed that the system was successful in providing recommendations that matched the user's mood and preferences.
International Journal for Research in Applied Science & Engineering Technology (IJRASET), 2022
Music is used in everyday life to modulate, enhance, and reduce undesirable emotional states like stress, fatigue, or anxiety. Even though in today's modern world, with the ever-increasing advancement in multimedia content and usage, various intuitive music players with many options have been developed, the user still has to manually browse through the list of songs and select songs that complement and suit his or her current mood. This paper aims to solve the issue of manually finding songs to suit one's mood, along with creating a high-accuracy CNN model for facial emotion recognition. Through the webcam, the emotional state can be deduced from facial expressions. A CNN classifier was used to create the neural network model, which is trained and tested using OpenCV to detect mood from facial expressions. A playlist is generated by the system according to the mood predicted by the model. The model used for predicting the mood obtained an accuracy of 97.42% with 0.09 loss on the training data from the FER2013 dataset.
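The core operation such a CNN classifier relies on — sliding a learned kernel over the face image — can be illustrated with a minimal pure-Python "valid"-mode convolution. A real model would of course use a framework such as Keras or PyTorch; this sketch only shows the mechanics of one layer.

```python
def conv2d_valid(image, kernel):
    """Minimal 'valid'-mode 2-D cross-correlation, the building block of a CNN layer."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            # Dot product of the kernel with the image patch at (r, c).
            sum(image[r + i][c + j] * kernel[i][j]
                for i in range(kh) for j in range(kw))
            for c in range(out_w)
        ]
        for r in range(out_h)
    ]

def relu(feature_map):
    """Elementwise ReLU activation, as typically applied after each convolution."""
    return [[max(0, v) for v in row] for row in feature_map]
```

Stacking several such convolution + activation stages (with pooling) is what lets the network learn edge, texture, and part detectors from the FER2013 faces.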
Research Square (Research Square), 2023
Music always has a special connection to our emotions. It is a method of connecting people all over the world through pure emotions. When all of this is considered, it is exceedingly difficult to generalize music and state that everyone would enjoy the same type. Mood-based music recommendation is greatly needed because it will assist humans in relieving stress and listening to soothing music based on their current emotions. Its primary goal is to accurately predict the user's mood, and songs are played by the application based on the user's selection as well as their current mood. This bot recognizes human emotions from facial images using Human-Computer Interaction (HCI). The extraction of facial elements from the user's face is another critical factor. We used the CNN algorithm to accurately detect the user's face in the live webcam feed and to detect emotion based on facial attributes such as the arrangement of the user's mouth and eyes. There is also a questionnaire where the user can directly select a preferred option.
IRJET, 2020
Recent studies confirm that humans respond and react to music, and that music has a high impact on a person's brain activity. The average American listens to up to four hours of music daily. Individuals tend to listen to music that suits their mood and interests. This project focuses on building an application to suggest songs to users based on their mood by capturing facial expressions. Facial expression is a form of nonverbal communication. Computer vision is an interdisciplinary field that helps convey a high-level understanding of digital images or videos to computers. In this system, computer vision components are used to determine the user's emotion through facial expressions. Once the emotion is recognized, the system suggests a playlist for that emotion, saving a great deal of time for the user over selecting and playing songs manually. The emotion-based music player also keeps track of user details such as the number of plays for each song and the types of songs by category and interest level, and reorganizes the playlist each time. The system also notifies the user about such songs so that they can be deleted or changed. Listening to music affects human brain activity, and an emotion-based music player with an automated playlist can help users maintain a particular emotional state. This research proposes an emotion-based music player that creates playlists based on captured photos of the user, making the system more efficient, faster, and automatic. The main goal is to reduce the overall computational time and the cost of the designed system. It also aims to increase the accuracy of the system. The most important goal is to change the mood of a person if it is a negative one, such as sad or depressed. The model is validated by testing the system against user-dependent and user-independent datasets.
The methodology for solving this problem is to build a fully functional app (front end and back end). Starting from the front, there is a simple and understandable interface anyone can use, and this interface is connected to the back end. On the back end, the main task in this project is to build a Convolutional Neural Network to help achieve a high accuracy rate, because Convolutional Neural Networks are the best-suited architecture for building any network that works with images, and there are also many similar research papers that achieved clearly visible success in this field of research. A fully functional app was built to solve this problem (desktop only) and trained on nearly 28,000 images with different emotional states (happy, sad, angry, and normal), with a high accuracy rate of 85% for training and 83% for testing; the application successfully suggests music by recommending single songs that match the user's emotion.
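The 85% training and 83% testing rates reported above correspond to plain classification accuracy, which can be computed as follows (the example labels are invented for illustration):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    if len(y_true) != len(y_pred):
        raise ValueError("label lists must have the same length")
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)
```

Comparing this metric on the training and held-out test images (85% vs. 83% here) is the standard way to check that the network generalizes rather than memorizing the training faces.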
International Journal for Research in Applied Science and Engineering Technology (IJRASET), 2022
Traditional methods of playing music according to a person's mood required human interaction. Migrating to computer vision technology enables the automation of such a system. This article describes the implementation details of a real-time facial feature extraction and emotion recognition system. One way to do this is to compare selected facial features from an image against a face database. Recognizing emotions from images has become one of the active research topics in image processing and in applications based on human-computer interaction. The expression on the face is detected using a convolutional neural network (CNN) that classifies human emotions from dynamic facial expressions in real time. The FER dataset, which contains approximately 30,000 facial RGB images of different expressions, is used to train the model. Expression-based music players aim to scan and interpret this data and build playlists accordingly, suggesting a playlist of songs based on the user's facial expressions. This is an additional feature on top of the existing features of a music player.
Journal of Informatics Electrical and Electronics Engineering (JIEEE), A2Z Journals, 2021
We propose a new approach for playing music automatically using facial emotion. Most existing approaches involve playing music manually, using wearable computing devices, or classifying based on audio features. Instead, we propose to replace the manual sorting and playing. We have used a Convolutional Neural Network for emotion detection. For music recommendation, Pygame and Tkinter are used. Our proposed system tends to reduce the computational time involved in obtaining the results and the overall cost of the designed system, thereby increasing the system's overall accuracy. Testing of the system is done on the FER2013 dataset. Facial expressions are captured using an inbuilt camera. Feature extraction is performed on input face images to detect emotions such as happy, angry, sad, surprise, and neutral. A music playlist is generated automatically by identifying the current emotion of the user. It yields better performance in terms of computational time compared to the algorithms in the existing literature.
IRJET, 2021
Recent studies confirm that humans respond and react to music, and that music has a high impact on a person's brain activity. The average American listens to up to four hours of music daily. Everyone wants to listen to music of their individual taste, mostly based on their mood. Users always face the task of manually browsing the music library to form a playlist based on their current mood. The proposed project is very efficient and generates a music playlist based on the current mood of the user. Facial expressions are the most effective way of expressing the ongoing mood of a person. The objective of this project is to suggest songs for users based on their mood by capturing facial expressions. Facial expressions are captured through a webcam and fed into a learning algorithm, which provides the most probable emotion. Once the emotion is recognized, the system suggests a playlist for that emotion, thus saving a lot of time for the user. Once the emotion is detected by the CNN, it is passed to the Spotify API, which generates a playlist according to the emotion of the user. The experimental results show that prediction accuracy is over 80%.
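The hand-off from the CNN to the Spotify API can be sketched as a mapping from the detected emotion to target audio features; Spotify's recommendations endpoint accepts tunable parameters such as `target_valence` and `target_energy`. The numeric values below are illustrative guesses, not values from the paper.

```python
# Illustrative emotion-to-audio-feature mapping; the numbers are assumptions,
# not values reported in the paper.
EMOTION_TARGETS = {
    "happy":   {"target_valence": 0.9, "target_energy": 0.8},
    "sad":     {"target_valence": 0.2, "target_energy": 0.3},
    "angry":   {"target_valence": 0.3, "target_energy": 0.9},
    "neutral": {"target_valence": 0.5, "target_energy": 0.5},
}

def emotion_to_targets(emotion: str) -> dict[str, float]:
    """Audio-feature targets a Spotify recommendations request could use."""
    return dict(EMOTION_TARGETS.get(emotion.lower(), EMOTION_TARGETS["neutral"]))
```

A client such as spotipy would then pass these targets (with seed genres or artists) to the recommendations endpoint to build the emotion-matched playlist.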
IRJET, 2021
Facial expression recognition is one of the most difficult and highly convoluted procedures undertaken in the image processing paradigm. Facial expressions can also be used for other purposes, such as recognizing a person's mood, since humans convey most of their emotions through their faces. Identifying a person's mood is one of the most useful capabilities, as it can be applied in various applications to improve an individual's quality of life. For this purpose, this survey article presents an extensive analysis of the related work, leading to our approach for song recommendation through mood analysis of the individual. Our prescribed approach utilizes convolutional neural networks along with fuzzy classification for effective song recommendation. This approach will be expanded in future research articles on this topic.