Predicting Human-Reported Enjoyment Responses in Happy and Sad Music

2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII), 2019

Cited by 4 | Viewed 8
Abstract
Whether in a happy mood or a sad mood, humans enjoy listening to music. In this paper, we introduce a novel method for identifying the auditory features that best predict listener-reported enjoyment ratings: we split the features into qualitative feature groups, train predictive models on each group, and compare prediction performance. Using audio features related to dynamics, timbre, harmony, and rhythm, we predicted continuous enjoyment ratings for a set of happy and sad songs. We found that a distributed lag model with L1 regularization best predicted these responses, and that timbre-related features were most relevant for predicting enjoyment ratings in happy music, while harmony-related features were most relevant in sad music. This work adds to our understanding of how music influences affective human experience.
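
As a rough illustration of the modeling approach described in the abstract, the sketch below approximates a distributed lag model with L1 regularization as a Lasso regression fitted to lagged windows of per-frame audio features. This is not the authors' implementation: the feature dimensions, lag length, regularization strength, and synthetic data are all assumptions made for demonstration.

```python
# Minimal sketch (assumed, not the authors' code): a distributed lag model
# with L1 regularization, approximated as Lasso regression over lagged
# windows of per-frame audio features.
import numpy as np
from sklearn.linear_model import Lasso

def build_lagged_matrix(features: np.ndarray, n_lags: int) -> np.ndarray:
    """Stack the current and previous n_lags frames of each feature
    into one row per prediction target."""
    T, F = features.shape
    rows = []
    for t in range(n_lags, T):
        rows.append(features[t - n_lags : t + 1].ravel())  # (n_lags + 1) * F values
    return np.asarray(rows)

# Synthetic stand-ins for per-frame audio features (e.g., timbre or harmony
# descriptors) and time-aligned continuous enjoyment ratings.
rng = np.random.default_rng(0)
audio_features = rng.normal(size=(500, 8))   # 500 frames, 8 features
enjoyment = rng.normal(size=500)             # aligned listener ratings

n_lags = 10
X = build_lagged_matrix(audio_features, n_lags)
y = enjoyment[n_lags:]

# The L1 penalty drives many lag coefficients to zero, highlighting which
# feature/lag combinations carry predictive weight.
model = Lasso(alpha=0.1).fit(X, y)
print("non-zero coefficients:", np.count_nonzero(model.coef_))
```

In this framing, comparing models trained on different feature groups (dynamics, timbre, harmony, rhythm) amounts to building separate lagged matrices per group and comparing their prediction performance.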
Keywords
affective computing,neural networks,multivariate time series modeling,music processing