On-line policy optimisation of Bayesian spoken dialogue systems via human interaction

IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2013

Citations: 96 | Views: 35
Abstract
A partially observable Markov decision process (POMDP) has been proposed as a dialogue model that enables robustness to speech recognition errors and automatic policy optimisation using reinforcement learning (RL). However, conventional RL algorithms require a very large number of dialogues, necessitating a user simulator. Recently, Gaussian processes have been shown to substantially speed up the optimisation, making it possible to learn directly from interaction with human users. However, early studies have been limited to very low-dimensional spaces, and the learning has exhibited convergence problems. Here we investigate learning from human interaction using the Bayesian Update of Dialogue State (BUDS) system. This dynamic Bayesian network-based system has an optimisation space covering more than one hundred features, allowing a wide range of behaviours to be learned. Using an improved policy model and a more robust reward function, we show that stable learning can be achieved that significantly outperforms a simulator-trained policy.
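The sample efficiency the abstract attributes to Gaussian processes comes from estimating the Q-function by GP regression over belief-state features, so that each human dialogue informs the value estimate of nearby belief states. Below is a minimal sketch of that core idea, assuming a plain squared-exponential kernel and batch GP regression over (belief-feature, action) vectors with observed returns; the names GPQFunction and rbf_kernel, the kernel choice, and the toy data are illustrative assumptions, not the paper's GP-SARSA implementation.

```python
# Minimal sketch: Gaussian-process estimation of Q(belief, action) from
# a small number of observed dialogue returns. Illustrative only.
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0):
    """Squared-exponential kernel between the rows of X1 and X2."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

class GPQFunction:
    """GP regression posterior mean over Q(features) given observed returns."""
    def __init__(self, noise=0.1, length_scale=1.0):
        self.noise = noise
        self.length_scale = length_scale
        self.X = None
        self.alpha = None

    def fit(self, X, returns):
        # Standard GP regression: alpha = (K + sigma^2 I)^{-1} y.
        K = rbf_kernel(X, X, self.length_scale)
        K += self.noise**2 * np.eye(len(X))
        self.X = X
        self.alpha = np.linalg.solve(K, returns)

    def predict(self, x):
        # Posterior mean at a new (belief-feature, action) vector.
        k = rbf_kernel(np.atleast_2d(x), self.X, self.length_scale)
        return (k @ self.alpha).item()

# Toy usage: 20 feature vectors with synthetic per-dialogue returns.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))      # (belief, action) feature vectors
y = X[:, 0] - 0.5 * X[:, 1]       # synthetic returns, not real data
q = GPQFunction()
q.fit(X, y)
print(q.predict(X[0]))            # estimated Q-value for the first vector
```

Because the kernel generalises across similar belief states, far fewer dialogues are needed than for tabular or grid-based RL, which is what makes learning directly from human users feasible in the paper's setting.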
Keywords
Gaussian processes, Markov processes, belief networks, decision theory, human computer interaction, interactive systems, learning (artificial intelligence), optimisation, speech recognition, Bayesian spoken dialogue systems, Bayesian update, RL algorithms, automatic policy optimisation, convergence problems, dialogue state system, dynamic Bayesian network-based system, human interaction, improved policy model, online policy optimisation, optimisation space, partially observable Markov decision process, reinforcement learning, reward function, speech recognition error robustness, very low dimensional spaces, Gaussian process, POMDP, dialogue systems