META-GUI: Towards Multi-modal Conversational Agents on Mobile GUI
CoRR (2022)
Abstract
Task-oriented dialogue (TOD) systems have been widely used by mobile phone
intelligent assistants to accomplish tasks such as calendar scheduling or hotel
reservation. Current TOD systems usually focus on multi-turn text/speech
interaction and then call back-end APIs designed for TODs to perform
the task. However, this API-based architecture greatly limits the
information-searching capability of intelligent assistants and may even lead to
task failure if TOD-specific APIs are not available or the task is too
complicated to be executed by the provided APIs. In this paper, we propose a
new TOD architecture: GUI-based task-oriented dialogue system (GUI-TOD). A
GUI-TOD system can directly perform GUI operations on real APPs and execute
tasks without invoking TOD-specific backend APIs. Furthermore, we release
META-GUI, a dataset for training a Multi-modal convErsaTional Agent on mobile
GUI. We also propose a multi-modal action prediction and response model, which
shows promising results on META-GUI. The dataset, code, and leaderboard are
publicly available.
Keywords
meta-gui,multi-modal