A framework to co-optimize task and social dialogue policies using Reinforcement Learning.

IVA (2020)

Abstract
One of the main challenges for conversational agents is to select the optimal dialogue policy based on the state of the interaction. This challenge becomes even harder when the conversational agent not only has to achieve a specific task but also aims at building rapport. Although some prior work has tackled this challenge with a Reinforcement Learning (RL) approach, it tends to assume a single optimal policy for all users, regardless of their conversational goals. In this work, we describe a framework for building an RL-based agent able to adapt its dialogue policy to its user's conversational goals. After building a rule-based agent and a user simulator that communicate at the dialogue-act level, we crowdsource the authoring of surface sentences for both the simulated users and the agent, which allows us to generate a dataset of interactions in natural language. We then annotate each of these interactions with a single rapport score and analyze the links between simulated users' conversational goals, agent conversational policies, and rapport. Our results show that rapport was higher when both or neither of the interlocutors tried to build rapport. We use this finding to inform the design of a social reward function, which we rely on to train an RL-based agent using a hybrid approach of supervised learning and reinforcement learning. We evaluate our approach by comparing two versions of the RL-based agent: one that takes users' conversational goals into account and one that does not. The results show that the agent that adapts its dialogue policy to users' conversational goals performs better.
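The abstract does not specify the exact reward formulation or training procedure. The following is a minimal, hypothetical Python sketch of how a social reward consistent with the reported finding (rapport is highest when both or neither interlocutor seeks rapport) might be combined with a task reward, and how the user's conversational goal can be folded into the RL state. All names, values, and the tabular Q-learning update are illustrative assumptions, not the paper's method.

```python
from dataclasses import dataclass

# Hypothetical sketch: names and values are assumptions, not taken from the paper.

@dataclass
class Turn:
    user_seeks_rapport: bool    # simulated user's conversational goal
    agent_social_act: bool      # whether the agent chose a rapport-building act
    task_progress: float        # task-oriented reward component for this turn

def social_reward(turn: Turn,
                  match_bonus: float = 1.0,
                  mismatch_penalty: float = -1.0) -> float:
    """Combine task progress with a social term that rewards the agent only
    when its rapport-building behaviour matches the user's goal."""
    social = match_bonus if turn.agent_social_act == turn.user_seeks_rapport \
        else mismatch_penalty
    return turn.task_progress + social

# Toy tabular Q-learning over (dialogue state, user goal) pairs; the paper's
# actual training is a hybrid of supervised and reinforcement learning, whose
# details are not given in the abstract.
Q: dict[tuple, float] = {}

def q_update(state, action, reward, next_state, actions,
             alpha: float = 0.1, gamma: float = 0.9) -> None:
    best_next = max((Q.get((next_state, a), 0.0) for a in actions), default=0.0)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# Example: the same social act yields opposite rewards under different user goals.
r1 = social_reward(Turn(user_seeks_rapport=True,  agent_social_act=True, task_progress=0.5))
r2 = social_reward(Turn(user_seeks_rapport=False, agent_social_act=True, task_progress=0.5))
print(r1, r2)  # 1.5 -0.5
```

Including the user's conversational goal in the state key is what lets the learned policy differ across user types, which is the adaptation the evaluation compares against a goal-agnostic agent.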
Keywords
Conversational Agent, Reinforcement Learning, Socially-Aware, Dialogue Manager, Rapport