Multimodal representation learning over heterogeneous networks for tag-based music retrieval

Expert Systems with Applications (2022)

Abstract
Learning representations of data described by features from multiple modalities has received much attention in Music Information Retrieval. Among the available sources of information, musical data can be represented mainly by features extracted from acoustic content, lyrics, and metadata, which carry complementary information and are relevant for discriminating between recordings. In this work, we propose a new method for learning multimodal representations structured as a heterogeneous network, capable of incorporating different musical features into a single representation while simultaneously exploiting their similarity. Our multimodal representation is centered on tag information extracted with a state-of-the-art neural language model and, in a complementary way, on the audio represented by its mel-spectrogram. We subjected our method to a robust evaluation process composed of 10,000 queries covering different scenarios and model parameter variations. In addition, we compute the Mean Average Precision and compare the proposed representation to representations built only from audio or from tags obtained with a pre-trained neural model. The proposed method achieves the best results in all evaluated scenarios, highlighting the discriminative power that multimodality can add to musical representations.
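To make the retrieval setup concrete, the sketch below illustrates a much simpler baseline than the paper's heterogeneous network: each track gets a placeholder audio descriptor (standing in for a pooled mel-spectrogram) and a placeholder tag embedding (standing in for a neural language model output), the two are fused by naive normalized concatenation, tracks are ranked by cosine similarity, and Mean Average Precision is computed over a toy query set. All feature shapes and names here are hypothetical assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder features for the two modalities discussed in the abstract:
# a pooled mel-spectrogram descriptor and a tag embedding per track.
# Shapes are illustrative assumptions only.
n_tracks, d_audio, d_tag = 1000, 128, 768
audio_feats = rng.normal(size=(n_tracks, d_audio))
tag_feats = rng.normal(size=(n_tracks, d_tag))

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Naive multimodal fusion: normalize each modality and concatenate.
# (The paper instead learns the fusion through a heterogeneous network.)
fused = l2_normalize(np.hstack([l2_normalize(audio_feats),
                                l2_normalize(tag_feats)]))

def rank_by_cosine(query_idx, feats):
    """Rank all tracks against the query by cosine similarity."""
    sims = feats @ feats[query_idx]
    order = np.argsort(-sims)
    return order[order != query_idx]  # exclude the query itself

def average_precision(ranked, relevant):
    """Mean of precision@k at each rank k holding a relevant item."""
    hits, precisions = 0, []
    for k, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / k)
    return float(np.mean(precisions)) if precisions else 0.0

# MAP over a toy query set with synthetic relevance judgments.
queries = rng.choice(n_tracks, size=50, replace=False)
relevance = {q: set(rng.choice(n_tracks, size=10, replace=False)) - {q}
             for q in queries}
ap_scores = [average_precision(rank_by_cosine(q, fused), relevance[q])
             for q in queries]
print("MAP:", np.mean(ap_scores))
```

With random features and random relevance judgments the printed MAP is near chance; the point is only the evaluation mechanics, since the paper reports MAP over 10,000 real queries against audio-only and tag-only baselines.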
Keywords
Music representation learning, Multimodal representation learning, Music information retrieval, Tag-based music retrieval