Reinforcement Learning with Chromatic Networks for Compact Architecture Search

semanticscholar(2021)

Abstract
We present a neural architecture search algorithm to construct compact reinforcement learning (RL) policies, by combining Efficient Neural Architecture Search (ENAS) (Vinyals et al., 2015; Pham et al., 2018; Zoph & Le, 2017) and Evolution Strategies (ES) (Salimans et al., 2017) in a highly scalable and intuitive way. By defining the combinatorial search space of NAS to be the set of different edge-partitionings (colorings) into same-weight classes, we represent compact architectures via efficient learned edge-partitionings. For several RL tasks, we manage to learn colorings translating to effective policies parameterized by as few as 17 weight parameters, providing >90% compression over vanilla policies and 6x compression over state-of-the-art compact policies based on Toeplitz matrices (Choromanski et al., 2018), while still maintaining good reward. We believe that our work is one of the first attempts to propose a rigorous approach to training structured neural network architectures for RL problems that are of interest especially in mobile robotics (Gage, 2002) with limited storage and computational resources.
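To make the edge-partitioning idea concrete, here is a minimal sketch (not the authors' code; class and parameter names are hypothetical) of a weight-shared linear layer: every edge of a dense layer is assigned one of k colors, and all edges with the same color share a single trainable weight, so the layer has only k parameters regardless of its width. In the paper the color assignment is learned by an ENAS-style controller and the shared weights are trained with ES; here the coloring is random purely for illustration.

```python
import numpy as np

class ChromaticLinear:
    """Sketch of a 'chromatic' layer: edges partitioned into num_colors
    same-weight classes, so only num_colors weights are trainable."""

    def __init__(self, in_dim, out_dim, num_colors, rng=None):
        rng = np.random.default_rng(0) if rng is None else rng
        # Edge partitioning: one color index per (input, output) edge.
        # The paper learns this assignment with an ENAS-style controller;
        # a random coloring is used here only for illustration.
        self.colors = rng.integers(num_colors, size=(in_dim, out_dim))
        # One shared weight per color class -- the only trainable parameters.
        self.shared_weights = rng.normal(scale=0.1, size=num_colors)

    def __call__(self, x):
        # Expand shared weights into a dense matrix via the coloring,
        # then apply an ordinary layer (bias omitted for brevity).
        W = self.shared_weights[self.colors]   # shape (in_dim, out_dim)
        return np.tanh(x @ W)

# A policy layer with 24*8 = 192 edges but only 17 distinct weights,
# matching the parameter count reported in the abstract.
layer = ChromaticLinear(in_dim=24, out_dim=8, num_colors=17)
action = layer(np.zeros(24))
```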