Learning Community-Based Preferences via Dirichlet Process Mixtures of Gaussian Processes.

IJCAI '13: Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence (2013)

Abstract
Bayesian approaches to preference learning using Gaussian Processes (GPs) are attractive due to their ability to explicitly model uncertainty in users' latent utility functions; unfortunately, existing techniques have cubic time complexity in the number of users, which renders this approach intractable for collaborative preference learning over a large user base. Exploiting the observation that user populations often decompose into communities of shared preferences, we model user preferences as an infinite Dirichlet Process (DP) mixture of communities and learn (a) the expected number of preference communities represented in the data, (b) a GP-based preference model over items tailored to each community, and (c) the mixture weights representing each user's fraction of community membership. This results in a learning and inference process that scales linearly in the number of users rather than cubically, and it additionally provides the ability to analyze individual community preferences and their associated members. We evaluate our approach on a variety of preference data sources, including Amazon Mechanical Turk, showing that our method is more scalable than, and as accurate as, previous GP-based preference learning work.
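To make the modeling idea concrete, the following is a minimal Python sketch of a DP mixture in which each community carries a GP-distributed utility function over items, and users are assigned to communities according to how well those utilities explain their pairwise preferences. It is not the authors' implementation: the synthetic data, the RBF kernel, the concentration parameter `alpha`, and the use of hard CRP-style assignments (rather than the per-user mixture weights described in the abstract) are simplifying assumptions made for illustration only.

```python
# Illustrative sketch: CRP-approximated DP mixture of GP utility functions over items,
# with users assigned to communities by how well each community's utilities explain
# their pairwise preferences. Simplified relative to the paper's model.
import numpy as np

rng = np.random.default_rng(0)

# ----- synthetic items and preference data (assumed, for illustration) -----
n_items, n_users, prefs_per_user = 20, 60, 15
X = rng.normal(size=(n_items, 2))                      # item feature vectors

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between item feature matrices."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

K = rbf_kernel(X, X) + 1e-6 * np.eye(n_items)          # GP prior covariance over items

# Ground-truth communities used only to generate the synthetic preferences.
true_utils = rng.multivariate_normal(np.zeros(n_items), K, size=3)
true_z = rng.integers(0, 3, size=n_users)
prefs = []                                             # prefs[u] = list of (winner, loser)
for u in range(n_users):
    pairs = []
    for _ in range(prefs_per_user):
        i, j = rng.choice(n_items, size=2, replace=False)
        du = true_utils[true_z[u], i] - true_utils[true_z[u], j]
        pairs.append((i, j) if rng.random() < 1 / (1 + np.exp(-du)) else (j, i))
    prefs.append(pairs)

# ----- DP mixture of GP utilities, CRP-style Gibbs sweep over user assignments -----
alpha = 1.0                                            # DP concentration (assumed value)

def log_lik(pairs, f):
    """Log-likelihood of a user's pairwise preferences under utility vector f."""
    diffs = np.array([f[i] - f[j] for i, j in pairs])
    return -np.logaddexp(0.0, -diffs).sum()            # sum of log sigmoid(f_i - f_j)

def sample_utility():
    """Draw a fresh community utility function from the GP prior."""
    return rng.multivariate_normal(np.zeros(n_items), K)

z = np.zeros(n_users, dtype=int)                       # all users start in one community
utils = [sample_utility()]

for sweep in range(30):
    for u in range(n_users):
        counts = np.bincount(np.delete(z, u), minlength=len(utils)).astype(float)
        # CRP prior over existing communities plus one proposed new community.
        log_prior = np.log(np.append(counts + 1e-12, alpha))
        cand = utils + [sample_utility()]
        log_post = log_prior + np.array([log_lik(prefs[u], f) for f in cand])
        p = np.exp(log_post - log_post.max()); p /= p.sum()
        c = rng.choice(len(cand), p=p)
        if c == len(utils):
            utils.append(cand[-1])                     # open a new community
        z[u] = c
    # (A full sampler would also resample each community's GP utility given its
    #  members' preferences; here utilities stay fixed at their prior draws.)

print("inferred number of occupied communities:", len(np.unique(z)))
```

The design point the sketch reflects is that the GP covariance is built over items only, so each user contributes just a cheap assignment step; this is the source of the linear-in-users scaling described above, in contrast to placing a single GP over the full user base.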
Keywords
GP-based preference model,collaborative preference,individual community preference,model user preference,preference community,preference data source,previous GP-based preference,shared preference,large user base,user population,community-based preference,dirichlet process mixture