Provably Safe Model-Based Meta Reinforcement Learning - An Abstraction-Based Approach

CDC (2021)

Abstract
While conventional reinforcement learning focuses on designing agents that can perform a single task, meta-learning aims, instead, to design agents that can generalize to different tasks (e.g., environments, obstacles, and goals) that were not considered during their design or training. In this spirit, we consider in this paper the problem of training a provably safe Neural Network (NN) controller for uncertain nonlinear dynamical systems that can generalize to new tasks not present in the training data while preserving strong safety guarantees. Our approach is to learn a set of NN controllers during the training phase. When the task becomes available at runtime, our framework carefully selects a subset of these NN controllers and composes them to form the final NN controller. Critical to our approach is the ability to compute a finite-state abstraction of the nonlinear dynamical system. This abstract model captures the behavior of the closed-loop system under all possible NN weights, and is used both to train the NNs and to compose them when the task becomes available. We provide theoretical guarantees that govern the correctness of the resulting NN. We evaluate our approach on the problem of controlling a wheeled robot in cluttered environments that were not present in the training data.
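The runtime composition step described above admits a compact illustration. The Python below is a minimal sketch under our own assumptions (compose_controller, the dictionary encoding of the abstraction, and every identifier are hypothetical, not the paper's API): given a finite-state abstraction whose transitions over-approximate the closed-loop behavior of each trained NN controller, a backward fixed-point computation selects one controller per abstract state so that the goal is reached while unsafe states are provably avoided.

```python
# Minimal, hypothetical sketch of composing pre-trained NN controllers
# at runtime using a finite-state abstraction. All names and the data
# encoding below are illustrative assumptions, not the paper's API.

def compose_controller(abstraction, start, goal, unsafe):
    """Select one controller per abstract state via a backward fixed point.

    abstraction: dict mapping (abstract_state, controller_id) to the set of
        abstract successor states, over-approximating the closed-loop
        behavior of that NN controller under all modeled uncertainty.
    Returns a dict abstract_state -> controller_id, or None if no
    provably safe composition drives `start` into `goal`.
    """
    winning = set(goal)   # states from which reaching the goal is guaranteed
    policy = {}
    changed = True
    while changed:        # iterate until no new winning state is found
        changed = False
        for (state, ctrl), successors in abstraction.items():
            if state in winning or state in unsafe:
                continue
            # A controller is acceptable only if EVERY abstract successor
            # is already winning; this is what yields the safety guarantee.
            if successors and successors <= winning:
                winning.add(state)
                policy[state] = ctrl
                changed = True
    return policy if start in winning else None


# Toy example: two trained controllers, one of which is unsafe from q1.
abstraction = {
    ("q0", "nn_a"): {"q1"},
    ("q1", "nn_b"): {"q_goal"},
    ("q1", "nn_a"): {"q_bad"},   # nn_a can reach the unsafe region from q1
}
print(compose_controller(abstraction, "q0", {"q_goal"}, {"q_bad"}))
# -> {'q1': 'nn_b', 'q0': 'nn_a'}
```

In the paper's setting, the abstract transitions would come from reachability analysis of the uncertain nonlinear dynamics under all possible NN weights; here they are hand-written for the toy example.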
Keywords
abstraction-based approach, uncertain nonlinear dynamical systems, training data, safety guarantees, training phase, NN controller, finite-state abstraction, nonlinear dynamical system, abstract model captures, closed-loop system, NN weights, resulting NN controller, provably safe model-based meta reinforcement learning, provably safe neural network controller