Learning Adversarial Transformer for Symbolic Music Generation

IEEE Transactions on Neural Networks and Learning Systems (2023)

Cited by 42 | Viewed 91
Abstract
Symbolic music generation remains an unsettled problem facing several challenges. A complete music score is a very long note sequence, consisting of multiple tracks with recurring elements and their variants at various levels. The transformer model, benefiting from its self-attention, has shown advantages in modeling long sequences, and there have been attempts to apply transformer-based models to music generation. However, previous works train the model with the same strategy as text generation, despite the obvious differences between the patterns of text and music, and these models cannot consistently produce music samples of high quality. In this article, we propose a novel adversarial transformer to generate music pieces with high musicality. Generative adversarial learning and self-attention networks are combined creatively: the generation of long sequences is guided by adversarial objectives, which provide a strong regularization that enforces the transformer to focus on learning the global and local structures. Instead of adopting the time-consuming Monte Carlo (MC) search method commonly used in existing sequence generative models, we propose an effective and convenient method to compute the reward for each generated step (REGS) of the long sequence. The discriminator is trained to optimize elaborately designed global and local loss objectives simultaneously, which enables it to give reliable REGS to the generator. The adversarial objective, combined with the teacher-forcing objective, guides the training of the generator. The proposed model can generate single-track or multitrack music pieces. Experiments show that our model generates long music pieces with improved quality compared with the original music transformers.
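The abstract's core mechanism can be illustrated with a minimal sketch: a discriminator that scores every prefix of the generated sequence yields a per-step reward directly (the REGS idea), avoiding SeqGAN-style Monte Carlo rollouts, and the generator mixes this adversarial signal with the teacher-forcing loss. The function names, the reward weighting, and the mixing coefficient `lam` below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def regs_rewards(disc_step_scores):
    # Sketch of REGS: the discriminator emits a realism score for every
    # prefix of the generated sequence, so each step receives a reward
    # directly, with no Monte Carlo rollout per step.
    return np.asarray(disc_step_scores, dtype=float)

def generator_loss(ce_per_step, rewards, lam=0.5):
    # Hypothetical combination of the teacher-forcing cross-entropy with a
    # reward-weighted adversarial term (policy-gradient style). Steps the
    # discriminator judges unrealistic (low reward) are penalized more.
    # The mixing weight `lam` is an assumption, not from the paper.
    ce_per_step = np.asarray(ce_per_step, dtype=float)
    ce = np.mean(ce_per_step)                          # teacher-forcing term
    adv = np.mean(ce_per_step * (1.0 - rewards))       # adversarial term
    return (1.0 - lam) * ce + lam * adv
```

With all rewards at 1.0 the adversarial term vanishes and only the scaled teacher-forcing loss remains; lower rewards increase the total loss, pushing the generator toward sequences the discriminator accepts.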
Keywords
Adversarial learning,long sequence generation,music generation,self-attention,transformer