Text-Based Interactive Recommendation via Constraint-Augmented Reinforcement Learning

NeurIPS (2019)

Abstract
Text-based interactive recommendation provides richer user preferences and has demonstrated advantages over traditional interactive recommender systems. However, recommendations can easily violate user preferences expressed in past natural-language feedback, since the recommender needs to explore new items for further improvement. To alleviate this issue, we propose a novel constraint-augmented reinforcement learning (RL) framework to efficiently incorporate user preferences over time. Specifically, we leverage a discriminator to detect recommendations that violate the user's historical preferences, and incorporate it into the standard RL objective of maximizing expected cumulative future rewards. Our proposed framework is general and is further extended to the task of constrained text generation. Empirical results show that the proposed method yields consistent improvement relative to standard RL methods.
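The abstract describes folding a discriminator-based constraint penalty into the usual RL return: the discriminator scores how likely a recommended item is to contradict the user's past feedback, and that score discounts the environment reward the policy maximizes. Below is a minimal sketch of one possible instantiation of that idea; the function names, the penalty weight `lam`, and the toy numbers are illustrative assumptions, not the authors' implementation.

```python
def augmented_reward(env_reward, violation_prob, lam=1.0):
    """Fold a discriminator-based constraint penalty into the step reward.

    env_reward:     scalar reward from the recommendation environment
    violation_prob: discriminator's estimated probability that the
                    recommended item violates the user's past feedback
    lam:            trade-off weight between reward and constraint
    """
    return env_reward - lam * violation_prob


def discounted_return(rewards, gamma=0.99):
    """Standard discounted cumulative return that the RL objective maximizes."""
    ret, g = 0.0, 1.0
    for r in rewards:
        ret += g * r
        g *= gamma
    return ret


# Toy episode: three recommendation steps with hypothetical values.
env_rewards = [1.0, 0.0, 1.0]    # e.g. click / no-click feedback per step
violations  = [0.1, 0.8, 0.2]    # hypothetical discriminator outputs per step
rewards = [augmented_reward(r, v) for r, v in zip(env_rewards, violations)]
print(discounted_return(rewards))  # quantity the constrained policy maximizes
```

In this sketch, a recommendation that strongly contradicts past feedback (high `violation_prob`) drags down the return even if the immediate environment reward is high, which is the intuition behind the constraint-augmented objective.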
Keywords
user preferences