Listening to music affects human brain activity. An emotion-based music player with an automated playlist can help users maintain a particular emotional state. This research proposes an emotion-based music player that creates playlists from captured photos of the user. Manually sorting a playlist and annotating songs according to the current emotion is time consuming and tedious. Numerous algorithms have been implemented to automate this process; however, existing algorithms are slow, increase system cost by requiring additional hardware (e.g. EEG systems and sensors), and achieve low accuracy. This paper presents an algorithm that automates playlist generation, classifies newly added songs, and, as its main task, captures the person's current mood and plays songs accordingly, making the system faster and more automatic. The main goal is to reduce the overall computational time and cost of the designed system while increasing its accuracy. The most important goal is to change the person's mood when it is a negative one, such as sad or depressed. The model is validated by testing the system against user-dependent and user-independent datasets.
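A minimal Python sketch of the emotion-to-playlist step this abstract describes, assuming an emotion label has already been obtained from the captured photo; the song library, mood tags, and the uplift mapping are illustrative placeholders, not the paper's actual data:

```python
# Sketch: map a detected emotion to a playlist, redirecting negative
# moods to uplifting music (matching the paper's stated goal of
# changing a sad or depressed state). All song data is hypothetical.
import random

SONG_LIBRARY = {
    "happy": ["upbeat_track_1.mp3", "upbeat_track_2.mp3"],
    "calm":  ["ambient_track_1.mp3", "ambient_track_2.mp3"],
    "sad":   ["mellow_track_1.mp3", "mellow_track_2.mp3"],
    "angry": ["intense_track_1.mp3", "intense_track_2.mp3"],
}

# Negative moods are redirected to mood-lifting playlists.
UPLIFT_MAP = {"sad": "happy", "angry": "calm"}

def build_playlist(detected_emotion: str, size: int = 10) -> list[str]:
    """Return a shuffled playlist for the detected (possibly remapped) emotion."""
    target = UPLIFT_MAP.get(detected_emotion, detected_emotion)
    pool = list(SONG_LIBRARY.get(target, []))  # copy so the library is untouched
    random.shuffle(pool)
    return pool[:size]

print(build_playlist("sad"))  # serves "happy" tracks to lift the mood
```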
IRJET, 2023
This paper proposes the implementation of an intelligent agent that segregates songs and plays them according to the user's current mood. The music that best matches the detected emotion is recommended to the user as a playlist. Facial emotion recognition is a form of image processing: the movements of a person's face are converted into digital features using various image processing techniques, and the emotion the face expresses is recognised. The user's music collection is initially clustered by the emotion each song conveys, calculated from both the lyrics and the melody. Whenever the user wishes to generate a mood-based playlist, they take a picture of themselves at that instant; this image is subjected to face detection and emotion recognition, the user's emotion is identified, and an appropriate playlist is suggested.
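The abstract does not give its clustering details, but a minimal sketch of the initial song-clustering step might look like the following, assuming each song has already been reduced to a numeric vector combining melody-derived audio features and a lyric sentiment score (the feature values and cluster count here are invented for illustration):

```python
# Sketch: cluster songs by conveyed emotion using k-means over
# hypothetical per-song features [valence, energy, lyric_sentiment].
import numpy as np
from sklearn.cluster import KMeans

song_titles = ["track_a", "track_b", "track_c", "track_d"]
song_features = np.array([
    [0.9, 0.8,  0.7],   # upbeat melody, positive lyrics
    [0.2, 0.3, -0.6],   # slow melody, sad lyrics
    [0.8, 0.9,  0.5],
    [0.1, 0.2, -0.8],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(song_features)
for title, label in zip(song_titles, kmeans.labels_):
    print(title, "-> emotion cluster", label)
```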
International Journal for Research in Applied Science and Engineering Technology IJRASET, 2020
The human face is an important part of the human body and plays a major role in conveying an individual's emotional state. Manually sorting a song list and producing an appropriate playlist that matches an individual's mood is a tiring, time-consuming, and labour-intensive activity. Various algorithms have been proposed and developed to automate the playlist generation process; however, the available algorithms are slow, not very accurate, and sometimes require additional equipment such as EEG devices or sensors. The proposed model, based on facial expression recognition, generates playlists automatically, reducing the effort and time the process demands. The proposed system thus tends to reduce the time involved in obtaining results and the total cost of the system, while increasing the accuracy of the entire program.
IRJET, 2020
Recent studies confirm that humans respond and react to music, and that music has a high impact on brain activity. The average American listens to up to four hours of music daily, and individuals tend to listen to music that suits their mood and interests. This project focuses on creating an application that suggests songs to the user based on their mood by capturing facial expressions. Facial expression is a form of nonverbal communication. Computer vision is an interdisciplinary field that helps convey a high-level understanding of digital images or videos to computers; in this system, computer vision components are used to determine the user's emotion from facial expressions. Once the emotion is recognized, the system suggests a playlist for that emotion, saving the user a great deal of time over choosing and playing songs manually. The emotion-based music player also keeps track of user details such as the number of plays for every song and the types of songs by category and interest level, reorganizes the playlist each time, and notifies the user about particular songs so that they can be deleted or changed. Listening to music affects human brain activity, and an emotion-based music player with an automated playlist can help users maintain a particular emotional state. This research proposes an emotion-based music player that creates playlists from captured photos of the user, making the system faster and more automatic. The main goal is to reduce the overall computational time and cost of the designed system, while also increasing its accuracy; the most important goal is to change the mood of a person when it is a negative one, such as sad or depressed. The model is validated by testing the system against user-dependent and user-independent datasets. The methodology is to build a fully functional app (front end and back end): a simple, understandable interface that anyone can use, connected to a back end whose main component is a Convolutional Neural Network, chosen to achieve a high accuracy rate because CNNs are the best-established approach for networks that work with images, and many similar research papers have achieved notable success with them in this field. The resulting desktop application was trained on nearly 28,000 images covering four emotional states (Happy, Sad, Angry, and Normal), achieving a high accuracy rate of 85% on training data and 83% on test data, and successfully suggests music by recommending individual songs that match the user's feeling.
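A minimal Keras sketch of the kind of CNN this abstract describes: a small convolutional classifier over grayscale face crops with four output classes (Happy, Sad, Angry, Normal). The input size, layer sizes, and training settings are assumptions, not the authors' exact architecture:

```python
# Sketch: a small CNN emotion classifier with four output classes,
# along the lines of the network described in the abstract.
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # Happy, Sad, Angry, Normal

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),           # grayscale face crop (assumed size)
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                       # regularization for a ~28k-image dataset
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels))
```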
Human expression plays a vital role in determining the current state and mood of an individual; it helps in extracting and understanding emotion from facial features such as the eyes, cheeks, and forehead, or even the curve of a smile. Music is an art form that soothes and calms the human brain and body. Blending these two aspects, our project detects an individual's emotion through facial expression and plays music matched to the detected mood, either to lift the mood or simply calm the individual, while saving the time otherwise spent looking up different songs; in parallel, we develop software that can be used anywhere, providing the functionality of playing music according to the detected emotion. By developing a recommendation system, we assist the user in deciding which music to listen to, helping to reduce their stress levels. The user does not have to waste any time searching for songs: the image of the user is captured with a webcam, the user's mood/emotion is determined from the picture, and the best-matching song from the user's playlist is shown to the user according to that mood.
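A minimal OpenCV sketch of the capture step described here: grab one webcam frame, detect a face, and hand the crop to an emotion classifier. The `classify_emotion` function is a hypothetical stand-in for whatever trained model the system uses:

```python
# Sketch: single webcam capture -> face detection -> emotion label.
import cv2

def classify_emotion(face_img) -> str:
    # Placeholder: a real system would run a trained classifier here.
    return "happy"

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)   # default webcam
ok, frame = cap.read()
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        mood = classify_emotion(gray[y:y + h, x:x + w])
        print("detected mood:", mood)
```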
IRJET, 2022
A user's emotion or mood can be detected from their facial expressions, derived from the live feed of the system's camera. A lot of research is being conducted in Computer Vision and Machine Learning, where machines are trained to identify various human emotions or moods, and Machine Learning provides several techniques for detecting them. Music is a great connector: music players and other streaming apps are in high demand because they can be used anytime, anywhere, and combined with daily activities such as traveling and sports. People often use music as a means of mood regulation, specifically to change a bad mood, increase energy, or reduce tension; listening to the right kind of music at the right time may also improve mental health. Human emotions therefore have a strong relationship with music. In our proposed system, a mood-based music player performs real-time mood detection and suggests songs for the detected mood. The objective of this system is to analyze the user's image, predict the user's expression, and suggest songs suitable to the detected mood.
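Since this system works on a live feed rather than a single photo, a minimal sketch of the real-time loop might smooth per-frame predictions with a rolling majority vote so that one noisy frame does not switch the playlist; `classify_emotion` is again a hypothetical stand-in for the trained model:

```python
# Sketch: real-time mood detection over a live webcam feed with a
# rolling majority vote over recent per-frame predictions.
from collections import Counter, deque
import cv2

def classify_emotion(face_img) -> str:
    return "neutral"  # placeholder for the real classifier

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recent = deque(maxlen=30)   # roughly one second of predictions at 30 fps

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        recent.append(classify_emotion(gray[y:y + h, x:x + w]))
    if recent:
        mood = Counter(recent).most_common(1)[0][0]
        cv2.putText(frame, mood, (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("mood", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```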
Regular, 2020
This project presents a system to automatically detect emotional dichotomy and mixed emotional experience on a Linux-based system. Facial expressions, head movements, and facial gestures are captured from pictorial input in order to derive attributes such as distances, coordinates, and the movement of tracked points. A web camera is used to extract spectral attributes, and features are calculated using the Fisherface algorithm. Emotion is detected by a cascade classifier, and feature-level fusion is used to create a combined feature vector. Live actions of the user are recorded to capture emotion, and based on the calculated result the system plays songs and displays a book list.
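A minimal sketch of the Fisherface step named above, using OpenCV's FisherFaceRecognizer (available in opencv-contrib-python). The training arrays here are random placeholders; real input would be equal-sized grayscale face crops labeled by emotion:

```python
# Sketch: train and query a Fisherface recognizer on emotion-labeled
# face crops. Random data stands in for the real training set.
import cv2
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "neutral"]

# Placeholder training data: equal-sized grayscale images + integer labels.
train_images = [np.random.randint(0, 255, (100, 100), np.uint8) for _ in range(8)]
train_labels = np.array([i % len(EMOTIONS) for i in range(8)], dtype=np.int32)

recognizer = cv2.face.FisherFaceRecognizer_create()
recognizer.train(train_images, train_labels)

test_face = np.random.randint(0, 255, (100, 100), np.uint8)
label, confidence = recognizer.predict(test_face)
print("predicted:", EMOTIONS[label], "confidence:", confidence)
```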
International journal of Emerging Trends in Science and Technology, 2016
The most recognizable part of the human body is the face itself, and it plays an extremely significant role in extracting an individual's current emotional state. Music can definitely be a mood changer, but manually segregating songs and creating playlists according to the individual's emotional state is exhausting, time-consuming, boring, and labour-intensive work. To automate playlist generation and spare humans this hardship, a number of algorithms have been proposed and developed. However, these algorithms have their own drawbacks: they are computationally slow, less accurate, and some even require additional hardware such as EEG devices and sensors, which adds to the overall cost. Through this paper we propose a system based on facial expression extraction that automates playlist generation, reducing the effort and time required to segregate the list of songs. Moreover, the proposed system tends to reduce the computational time in obtaining results, improving overall accuracy while also reducing the design cost.
International Journal for Modern Trends in Science and Technology, 2021
In this day and age, most people in the world love to listen to music, and it plays an important part in everyone's life. We use music for many reasons, such as relaxation, inspiration, and energy. It helps in expressing our feelings and emotions, gives us relief, and helps reduce stress, as shown by the many people wearing earphones while taking an early morning walk, crossing the street, or even working. Many music applications have been built with different functionality and implementations, but most still share one issue: they play songs randomly, without reference to the user's frame of mind. In this paper, we propose a system that plays songs for the user according to their mood. Melody is a music player application that suggests an automated song playlist after analyzing the user's mood. We use Android Studio and Google Firebase to store songs on the server and then play them in the Melody app according to the user's mood.
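The Melody paper targets an Android app, but the server side it describes can be sketched with Firebase's Python admin SDK for illustration, assuming songs are stored in a Firebase Storage bucket under mood-named folders (the bucket name, key file, and folder layout below are all assumptions):

```python
# Sketch: list mood-tagged songs from a hypothetical Firebase Storage
# layout such as songs/happy/track.mp3, songs/sad/track.mp3, etc.
import firebase_admin
from firebase_admin import credentials, storage

cred = credentials.Certificate("serviceAccountKey.json")   # assumed key file
firebase_admin.initialize_app(cred, {"storageBucket": "melody-app.appspot.com"})

def songs_for_mood(mood: str) -> list[str]:
    """Return the storage paths of songs filed under the given mood folder."""
    bucket = storage.bucket()
    return [blob.name for blob in bucket.list_blobs(prefix=f"songs/{mood}/")]

print(songs_for_mood("happy"))
```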
International Journal of Engineering Applied Sciences and Technology
This advanced approach offers the user an automatically generated playlist of songs based on their mood. In today's world everyone uses music to relax. To automate the playlist generation process, many algorithms have been developed and proposed. The Emotion Based Music Player aims at perusing and inferring data from facial expressions and creating a playlist based on the extracted parameters; human moods are a common medium for understanding and sharing feelings and aims. Depending on the user's current mood, the player automatically selects and plays a song. The proposed system focuses on developing the Emotion Based Music Player by detecting human emotions through facial expression extraction. It covers playlist generation and the classification of emotions: facial expressions are captured through an inbuilt camera, the extracted image features are analyzed to determine the user's mood, and the playlist is arranged accordingly.
2017
Song players with automatic song-shuffling capability for mobile, personal, and handheld computers are widely available, and most of them accept user feedback to identify the user's mood and play songs accordingly. A key improvement area in this approach is the requirement for manual user input to determine the current emotional state: the onus is on the user to mark their present emotional state, which does not cater for any dynamism in the user's emotions. This paper introduces an approach that adds an automated human emotion recognition mechanism to an actively updated music provider, giving the user an automated and seamless song shuffler. The Facial Action Coding System devised by Carl-Herman Hjortsjö is the basis of the human emotion recognition aspect of this system. Music content is reviewed both by the user and via the user's emotional change as feedback to the music. "Face is...
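Since this abstract names the Facial Action Coding System, a minimal sketch of FACS-style emotion lookup might map detected facial Action Units (AUs) to a basic emotion using standard EMFACS-style combinations; the AU detector itself is out of scope here, so `detected_aus` is a hypothetical input:

```python
# Sketch: look up a basic emotion from detected FACS Action Units using
# well-known AU combinations (e.g. AU6 + AU12 for happiness).
EMOTION_AUS = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
    "anger":     {4, 5, 7, 23},  # brow lowerer + lid tightener + lip tightener
}

def emotion_from_aus(detected_aus: set[int]) -> str:
    """Return the first emotion whose AU combination is fully present."""
    for emotion, aus in EMOTION_AUS.items():
        if aus <= detected_aus:
            return emotion
    return "neutral"

print(emotion_from_aus({6, 12, 25}))  # -> happiness
```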