2018, Neural Computing and Applications
In addition to traditional tasks such as prediction, classification and translation, deep learning is receiving growing attention as an approach for music generation, as witnessed by recent research groups such as Magenta at Google and CTRL (Creator Technology Research Lab) at Spotify. The motivation is in using the capacity of deep learning architectures and training techniques to automatically learn musical styles from arbitrary musical corpora and then to generate samples from the estimated distribution. However, a direct application of deep learning to generate content rapidly reaches limits, as the generated content tends to mimic the training set without exhibiting true creativity. Moreover, deep learning architectures do not offer direct ways of controlling generation (e.g., imposing some tonality or other arbitrary constraints). Furthermore, deep learning architectures alone are autistic automata which generate music autonomously, without human user interaction, far from the objective of interactively assisting musicians to compose and refine music. Issues such as control, structure, creativity and interactivity are the focus of our analysis. In this paper, we select some limitations of a direct application of deep learning to music generation, analyze why these requirements are not fulfilled, and discuss possible approaches to address them. Various recent systems are cited as examples of promising directions. * To appear in Special Issue on Deep learning for music and audio, Neural Computing & Applications, Springer Nature, 2018. 1 With many variants such as convolutional networks, recurrent networks, autoencoders, restricted Boltzmann machines, etc. [GBC16].
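The control issue raised above — steering what gets sampled from an estimated distribution — can be illustrated with a minimal sketch: temperature-scaled sampling over next-note probabilities. The probability values below are invented for illustration, not taken from any of the cited systems.

```python
import math
import random

def temperature_sample(probs, temperature=1.0, rng=random):
    """Re-weight a categorical distribution by temperature and sample.

    temperature < 1 sharpens the distribution (more conservative, more
    mimicry of the training set); temperature > 1 flattens it
    (more surprising, less faithful output).
    """
    logits = [math.log(p) / temperature for p in probs]
    m = max(logits)
    exp = [math.exp(l - m) for l in logits]  # subtract max for numerical stability
    total = sum(exp)
    scaled = [e / total for e in exp]
    r, acc = rng.random(), 0.0
    for i, p in enumerate(scaled):
        acc += p
        if r < acc:
            return i, scaled
    return len(scaled) - 1, scaled

# Hypothetical next-note distribution over four candidate pitches.
index, scaled = temperature_sample([0.7, 0.2, 0.05, 0.05], temperature=0.5)
```

Temperature is one of the simplest control knobs available at sampling time; it does not impose structural constraints such as tonality, which is exactly the gap the abstract above points out.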
2019
We are interested in using deep learning models to generate new music. Using the Maestro Dataset, we will use an LSTM architecture that inputs tokenized MIDI files and outputs note predictions. Accuracy will be measured by comparing predicted notes to ground truths. Using A.I. for music is a relatively new area of study, and this project provides an investigation into creating an effective model for the music industry.
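The pipeline described above — tokenized MIDI in, next-note predictions out, accuracy against ground truth — can be sketched as follows. The pitch numbers and window length are illustrative assumptions, not the actual Maestro preprocessing.

```python
def make_training_windows(tokens, window=4):
    """Slice a token sequence into (input, target) pairs for
    next-token prediction: the model sees `window` tokens and is
    trained to predict the token that follows them."""
    pairs = []
    for i in range(len(tokens) - window):
        pairs.append((tokens[i:i + window], tokens[i + window]))
    return pairs

def accuracy(predicted, ground_truth):
    """Fraction of predicted notes matching the ground-truth notes."""
    hits = sum(p == g for p, g in zip(predicted, ground_truth))
    return hits / len(ground_truth)

# Hypothetical MIDI pitch numbers (60 = middle C).
notes = [60, 62, 64, 65, 67, 65, 64, 62]
pairs = make_training_windows(notes, window=4)
```

Each `(input, target)` pair is one supervised example; an LSTM would consume the input window and be scored on predicting the target note, which is what the stated accuracy metric measures.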
IRJET, 2022
This paper describes automatic music generation using deep learning. Music is generated as a sequence of ABC notes. Music technology is now practical for use with large-scale data. For music generation with deep learning, LSTMs or GRUs are most commonly used for modelling. Music generation is essentially a sequence-generation problem, and since LSTMs generate sequences efficiently, they are well suited to the task.
2021
This paper explores the idea of utilising Long Short-Term Memory neural networks (LSTMNN) for the generation of musical sequences in ABC notation. The proposed approach takes ABC notations from the Nottingham dataset and encodes them to be fed as input to the neural network. The primary objective is to give the network an arbitrary note, then let it process and extend a sequence from that note until a good piece of music is produced. Multiple tuning passes were made to amend the parameters of the network for optimal generation. The output is assessed on the basis of rhythm, harmony, and grammar accuracy.
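Encoding ABC text for a network, as described above, amounts to mapping each symbol to an integer index and back. A minimal sketch (the tune fragment is invented for illustration, not taken from the Nottingham dataset):

```python
def build_vocab(abc_text):
    """Map each distinct ABC character to an integer index."""
    chars = sorted(set(abc_text))
    to_idx = {c: i for i, c in enumerate(chars)}
    to_char = {i: c for c, i in to_idx.items()}
    return to_idx, to_char

def encode(abc_text, to_idx):
    """Turn ABC text into the integer sequence a network consumes."""
    return [to_idx[c] for c in abc_text]

def decode(indices, to_char):
    """Turn generated indices back into readable ABC notation."""
    return "".join(to_char[i] for i in indices)

# Hypothetical ABC fragment: note letters, durations, and bar lines.
fragment = "G2 AB | c2 BA | G4 |"
to_idx, to_char = build_vocab(fragment)
encoded = encode(fragment, to_idx)
```

The round trip matters: whatever index sequence the network emits must decode back to syntactically valid ABC, which is why the paper evaluates "grammar accuracy" alongside rhythm and harmony.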
ArXiv, 2021
Generating a complex work of art such as a musical composition requires true creativity, which depends on a variety of factors related to the hierarchy of musical language. Music generation has been approached with algorithmic methods and, more recently, with deep learning models also used in other fields such as computer vision. In this paper we put into context the existing relationships between AI-based music composition models and human musical composition and creativity processes. We give an overview of recent deep learning models for music composition and compare these models to the human composition process from a theoretical point of view. We try to answer some of the most relevant open questions for this task, among them whether current deep learning models can generate music with creativity and how similar AI and human composition processes are.
Machine Learning
Deep learning methods are recognised as state-of-the-art for many applications of machine learning. Recently, deep learning methods have emerged as a solution to the task of automatic music generation (AMG) using symbolic tokens in a target style, but their superiority over non-deep learning methods has not been demonstrated. Here, we conduct a listening study to comparatively evaluate several music generation systems along six musical dimensions: stylistic success, aesthetic pleasure, repetition or self-reference, melody, harmony, and rhythm. A range of models, both deep learning algorithms and other methods, are used to generate 30-s excerpts in the style of Classical string quartets and classical piano improvisations. Fifty participants with relatively high musical knowledge rate unlabelled samples of computer-generated and human-composed excerpts for the six musical dimensions. We use non-parametric Bayesian hypothesis testing to interpret the results, allowing the possibility o...
International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2021
Advancements in deep neural networks have made it possible to compose music that mimics music composed by humans. This paper explores the capacity of deep learning architectures to learn a musical style from an arbitrary musical corpus. The paper proposes a method for generating samples from the estimated distribution: musical chords are extracted for various instruments to train a sequential model that generates polyphonic music on selected instruments. We demonstrate a simple method comprising sequential LSTM models to generate polyphonic music. The model evaluation shows that the generated music is pleasant to hear and similar to music played by humans. This has great application in the entertainment industry, enabling music composers to generate a variety of creative music.
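Sequential LSTM models like the one described rest on the LSTM cell's gated update. A single forward step can be sketched in NumPy; the weights below are random placeholders, not a trained model, and the sizes are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM cell step. W: (4H, D) input weights, U: (4H, H)
    recurrent weights, b: (4H,) bias; rows are stacked as
    [input gate, forget gate, cell candidate, output gate]."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])          # input gate: how much new information enters
    f = sigmoid(z[H:2 * H])      # forget gate: how much old state survives
    g = np.tanh(z[2 * H:3 * H])  # candidate cell state
    o = sigmoid(z[3 * H:4 * H])  # output gate: how much state is exposed
    c = f * c_prev + i * g       # new cell state
    h = o * np.tanh(c)           # new hidden state
    return h, c

rng = np.random.default_rng(0)
D, H = 8, 16  # input size (e.g. a chord embedding) and hidden size
h, c = np.zeros(H), np.zeros(H)
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
```

In a chord-level model, `x` would be the encoding of the current chord; iterating the step over a chord sequence and attaching a softmax output layer yields the sequential predictor the paper describes.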
International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2021
The music lyrics we generally listen to are written by humans, with no machine involvement. Writing lyrics has never been an easy task: the lyrics need to be meaningful and, at the same time, in harmony and synchronised with the music played over them. They are written by experienced artists who have been writing lyrics for a long time. This project tries to automate lyric generation using a computer program and deep learning, producing lyrics that reduce the load on human skill and generating new lyrics far faster than humans ever could. The project generates lyrics with the assistance of both human and AI.
The wide-ranging impact of deep learning models implies significant application in music analysis, retrieval, and generation. Initial findings from musical application of a conditional restricted Boltzmann machine (CRBM) show promise towards informing creative computation. Taking advantage of the CRBM's ability to model temporal dependencies, full reconstructions of pieces are achievable given a few starting seed notes. The generation of new material using figuration from the training corpus requires restrictions on the size and memory space of the CRBM, forcing associative rather than perfect recall. Musical analysis and information complexity measures show the musical encoding to be the primary determinant of the nature of the generated results.
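The temporal conditioning that lets a CRBM reconstruct a piece from a few seed notes can be sketched via its conditional hidden-unit activation: hidden probabilities depend on the current frame through the usual RBM weights plus on past frames through extra directed weights. All weights and sizes below are random placeholders chosen for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def crbm_hidden_probs(v, v_history, W, B, c):
    """P(h = 1 | v, history) for a conditional RBM: the standard
    RBM term W @ v plus a directed term B @ v_history that
    conditions the hidden units on the preceding frames."""
    return sigmoid(W @ v + B @ v_history + c)

rng = np.random.default_rng(1)
V, H, lag = 12, 6, 2               # visible units (e.g. pitch classes), hidden units, history frames
W = rng.normal(size=(H, V))        # undirected RBM weights
B = rng.normal(size=(H, V * lag))  # directed conditioning weights on the flattened history
c = np.zeros(H)                    # hidden bias

v = rng.integers(0, 2, size=V).astype(float)             # current binary frame
v_hist = rng.integers(0, 2, size=V * lag).astype(float)  # seed frames, concatenated
p_h = crbm_hidden_probs(v, v_hist, W, B, c)
```

Because the history term shifts the hidden biases frame by frame, alternating Gibbs sampling from a few fixed seed frames can roll the model forward in time, which is the mechanism behind the reconstructions the abstract reports.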
IRJET, 2021
One of the exciting applications of recent advances in the field of AI is artificial music generation. Is it possible to reproduce artists' creativity through AI? Can a deep learning model be an inspiration or a productivity tool for musicians? To find the answers, we carried out a project on music generation through AI. The overall idea of the project was to take some existing music data and train a neural network model on it. The model has to learn the patterns in music that we humans enjoy, and once it has learned them, it should be able to generate new music for us. One important condition was that it cannot simply copy and paste from the training data.
IEEE Access
Music generation using deep learning has received considerable attention in recent years. Researchers have developed various generative models capable of imitating musical conventions, comprehending the musical corpora, and generating new samples based on the learning outcome. Although the samples generated by these models are persuasive, they often lack musical structure and creativity. For instance, a vanilla end-to-end approach, which deals with all levels of music representation at once, does not offer human-level control and interaction during the learning process, leading to constrained results. Indeed, music creation is a recurrent process that follows some principles by a musician, where various musical features are reused or adapted. On the other hand, a musical piece adheres to a musical style, breaking down into precise concepts of timbre style, performance style, composition style, and the coherency between these aspects. Here, we study and analyze the current advances in music generation using deep learning models through different criteria. We discuss the shortcomings and limitations of these models regarding interactivity and adaptability. Finally, we draw the potential future research direction addressing multi-agent systems and reinforcement learning algorithms to alleviate these shortcomings and limitations.