Toward hardware-aware deep-learning-based dialogue systems

Neural Computing and Applications (2021)

Abstract
In the past few years, transformer-based models have grown increasingly popular as they achieved new state-of-the-art performance on several natural language processing tasks. As these models are often extremely large, however, deploying them on embedded devices may not be feasible. In this work, we look at one specific application, retrieval-based dialogue systems, which poses additional difficulties when deployed in resource-constrained environments. Research on building dialogue systems able to engage in natural-sounding conversation with humans has attracted increasing attention in recent years. This has led to the rise of commercial conversational agents, such as Google Home, Alexa, and Siri, situated on embedded devices, which enable users to interface with a wide range of underlying functionalities in a natural and seamless manner. In part due to memory and computational power constraints, these agents require frequent communication with a server to process users' queries. This communication may act as a bottleneck, resulting in delays, or even a complete halt of the system should the network connection be lost or unavailable. We propose a new framework for hardware-aware retrieval-based dialogue systems based on the Dual-Encoder architecture, coupled with a clustering method that groups candidates pertaining to the same conversation, reducing storage capacity and computational power requirements.
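The retrieval-and-clustering idea above can be illustrated with a minimal sketch. Everything here is an illustrative assumption, not the paper's implementation: a toy vocabulary with random embeddings stands in for trained transformer encoders, a single shared encoder replaces the Dual-Encoder's separate context and response encoders, and candidates are grouped by conversation with one centroid kept per group so only the best-matching group is scored in full.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and random embedding table; a deployed system would use
# trained (e.g. transformer-based) encoders instead.
VOCAB = {w: i for i, w in enumerate(
    "hello hi how are you good thanks bye see later".split())}
EMB = rng.normal(size=(len(VOCAB), 8))

def encode(text):
    """Mean-pool word embeddings and L2-normalize (stand-in encoder)."""
    ids = [VOCAB[w] for w in text.split() if w in VOCAB]
    v = EMB[ids].mean(axis=0)
    return v / np.linalg.norm(v)

def score(context, candidate):
    """Dual-encoder relevance: dot product of the two encodings."""
    return float(encode(context) @ encode(candidate))

# Candidates grouped by conversation; only one centroid per group is
# stored and compared first, then the winning group's candidates are
# scored in full (an assumed simplification of the clustering step).
conversations = {
    "greeting": ["hi how are you", "hello"],
    "farewell": ["bye", "see you later"],
}
centroids = {cid: np.mean([encode(c) for c in cands], axis=0)
             for cid, cands in conversations.items()}

query = "hello hi"
q = encode(query)
best_group = max(centroids, key=lambda cid: float(q @ centroids[cid]))
response = max(conversations[best_group], key=lambda c: score(query, c))
```

Pre-filtering with centroids means the full dot-product scoring runs only against one group's candidates rather than the whole response pool, which is where the storage and compute savings on an embedded device would come from.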
Keywords
Dialogue systems, Natural language processing, Artificial intelligence