Learning Interpretable Models Expressed in Linear Temporal Logic

ICAPS 2019

Citations: 76 | Views: 48
Abstract
We examine the problem of learning models that characterize the high-level behavior of a system based on observation traces. Our aim is to develop models that are human interpretable. To this end, we introduce the problem of learning a Linear Temporal Logic (LTL) formula that parsimoniously captures a given set of positive and negative example traces. Our approach to learning LTL exploits a symbolic state representation, searching through a space of labeled skeleton formulae to construct an alternating automaton that models observed behavior, from which the LTL formula can be read off. Construction of interpretable behavior models is central to a diversity of applications related to planning and plan recognition. We showcase the relevance and significance of our work in the context of behavior description and discrimination: (i) active learning of a human-interpretable behavior model that describes observed examples obtained by interaction with an oracle; (ii) passive learning of a classifier that discriminates individual agents based on the human-interpretable, signature way in which they perform particular tasks. Experiments demonstrate the effectiveness of our symbolic model learning approach in providing human-interpretable models and classifiers from reduced example sets.
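To make the learning target concrete, the sketch below (not the paper's algorithm, which searches labeled skeleton formulae and builds an alternating automaton) illustrates what it means for a candidate LTL formula, interpreted over finite traces, to be consistent with a set of positive and negative example traces. The formula encoding, the helper names `holds` and `consistent`, and the toy traces are assumptions made purely for illustration.

```python
# Minimal illustration of the learning target: an LTL formula (finite-trace
# semantics) satisfied by every positive example trace and by no negative one.
# This is NOT the paper's search procedure; it only checks one candidate.

def holds(formula, trace, i=0):
    """Evaluate a small LTL fragment over a finite trace (list of sets of atoms)."""
    op = formula[0]
    if op == "atom":                      # atomic proposition
        return i < len(trace) and formula[1] in trace[i]
    if op == "not":
        return not holds(formula[1], trace, i)
    if op == "and":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "X":                         # next
        return i + 1 < len(trace) and holds(formula[1], trace, i + 1)
    if op == "F":                         # eventually
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "G":                         # always
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    raise ValueError(f"unknown operator: {op}")

def consistent(formula, positives, negatives):
    """True iff the formula separates the positive from the negative traces."""
    return (all(holds(formula, t) for t in positives)
            and not any(holds(formula, t) for t in negatives))

# Toy traces: each step is the set of atomic propositions that hold there.
positives = [[{"req"}, {"req"}, {"grant"}], [{"grant"}]]
negatives = [[{"req"}, set()], [set(), {"req"}]]

# Candidate hypothesis: "eventually grant" -- F grant
candidate = ("F", ("atom", "grant"))
print(consistent(candidate, positives, negatives))  # True on this toy data
```

A learner in this setting would enumerate or search candidate formulae and return a parsimonious one for which such a consistency check succeeds.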