A-Muze-Net: Music Generation by Composing the Harmony Based on the Generated Melody

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We present a method for the generation of MIDI files of piano music. The method models the right and left hands with two networks, where the left-hand network is conditioned on the right-hand one, so that the melody is generated before the harmony. The MIDI data is represented in a way that is invariant to the musical scale, and, for the purpose of conditioning the harmony, the melody is represented by the content of each bar, viewed as a chord. Finally, notes are added randomly, based on this chord representation, in order to enrich the generated audio. Our experiments show a significant improvement over the state of the art for training on such datasets, and demonstrate the contribution of each of the novel components.
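The representation steps described in the abstract (transposing to a scale-invariant form, summarizing each melody bar as a chord, and randomly adding chord tones to enrich the output) can be sketched in plain Python. All function names and details below are illustrative assumptions, not the paper's actual code:

```python
import random

def to_scale_invariant(notes, key_root):
    """Transpose MIDI pitches so the key root maps to C (scale-invariant view).

    `key_root` is the pitch-class offset of the key (e.g. 4 for E major).
    """
    return [n - key_root for n in notes]

def bar_chord(bar_notes):
    """Summarize a melody bar as a chord: the set of pitch classes it contains."""
    return sorted({n % 12 for n in bar_notes})

def enrich(bar_notes, chord, n_extra=2, seed=0):
    """Randomly add notes drawn from the bar's chord summary to thicken the texture."""
    rng = random.Random(seed)
    # Place the extra chord tones in a low octave (octave 3, MIDI offset 48).
    extras = [rng.choice(chord) + 48 for _ in range(n_extra)]
    return bar_notes + extras

melody_bar = [64, 67, 71, 72]            # E4 G4 B4 C5 as MIDI note numbers
chord = bar_chord(melody_bar)            # -> [0, 4, 7, 11]
harmony_bar = enrich([52, 55], chord)    # left-hand bar enriched with chord tones
print(chord, harmony_bar)
```

In this sketch the chord summary plays the role the abstract assigns to it: it is the bar-level conditioning signal passed from the melody (right-hand) model to the harmony (left-hand) model, and also the pool from which random enrichment notes are drawn.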

Original language: American English
Title of host publication: MultiMedia Modeling - 28th International Conference, MMM 2022, Proceedings
Editors: Björn Þór Jónsson, Cathal Gurrin, Minh-Triet Tran, Duc-Tien Dang-Nguyen, Anita Min-Chun Hu, Binh Huynh Thi Thanh, Benoit Huet
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 557-568
Number of pages: 12
ISBN (Print): 9783030983574
DOIs
State: Published - 1 Jan 2022
Event: 28th International Conference on MultiMedia Modeling, MMM 2022 - Phu Quoc, Viet Nam
Duration: 6 Jun 2022 - 10 Jun 2022

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13141 LNCS

Conference

Conference: 28th International Conference on MultiMedia Modeling, MMM 2022
Country/Territory: Viet Nam
City: Phu Quoc
Period: 6/06/22 - 10/06/22

Keywords

  • Midi processing
  • Music generation
  • Recurrent neural networks

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • General Computer Science
