Meta Weight Learning via Model-Agnostic Meta-Learning

Neurocomputing (2020)

Abstract
While meta learning approaches have achieved remarkable success, obtaining a stable and unbiased meta-learner remains a significant challenge, since the initial model of a meta-learner can be too biased towards existing tasks to adapt to new ones. To avoid a biased meta-learner and improve its generalizability, this paper proposes a generic meta learning method that learns an unbiased meta-learner across a variety of tasks before its initial model is adapted to unseen tasks. Specifically, the paper presents a meta weight learning method for minimizing the inequality of performance across different training tasks. An end-to-end training approach is introduced that jointly learns the task weights and the initialization of the network model. In addition, a variety of weight measurement methods are designed to test how different weighting schemes improve the model-agnostic meta-learning algorithm. Simulation results show that the proposed meta weight learning method not only outperforms state-of-the-art meta learning algorithms but is also superior to the manually designed weight measurement methods on discrete and continuous control problems.
Keywords
Meta learning, Deep reinforcement learning, Gradient update, Weight, Meta learner
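To make the weighted meta-update concrete, the sketch below shows a MAML-style outer loop on a toy sine-regression problem in which per-task query losses are reweighted so that tasks the current initialization handles poorly contribute more to the meta-gradient. This is a minimal illustration under assumed settings (the `sample_task` problem, the softmax-over-loss weighting, and all hyperparameters are hypothetical); the paper learns the weights end-to-end, which this hand-designed weighting rule does not reproduce.

```python
# Sketch of a weighted MAML meta-update (not the authors' implementation).
import torch
import torch.nn as nn

torch.manual_seed(0)

def sample_task(n=20):
    """Toy task: regress y = a * sin(x + b) with task-specific (a, b)."""
    a = torch.rand(1) * 4.0 + 0.1
    b = torch.rand(1) * 3.14
    x = torch.rand(n, 1) * 10.0 - 5.0
    y = a * torch.sin(x + b)
    return x[: n // 2], y[: n // 2], x[n // 2:], y[n // 2:]  # support / query split

model = nn.Sequential(nn.Linear(1, 40), nn.ReLU(), nn.Linear(40, 1))
names = [name for name, _ in model.named_parameters()]
params = list(model.parameters())
meta_opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()
inner_lr, num_tasks, temperature = 0.01, 4, 1.0

def forward(p, x):
    # Functional forward pass so adapted parameters stay in the autograd graph.
    return torch.func.functional_call(model, dict(zip(names, p)), x)

for step in range(200):
    query_losses = []
    for _ in range(num_tasks):
        xs, ys, xq, yq = sample_task()
        support_loss = loss_fn(forward(params, xs), ys)
        grads = torch.autograd.grad(support_loss, params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(params, grads)]  # inner step
        query_losses.append(loss_fn(forward(adapted, xq), yq))

    losses = torch.stack(query_losses)
    # Hand-designed weighting: give larger weight to tasks with larger query loss,
    # pushing the shared initialization to be less biased toward the easy tasks.
    weights = torch.softmax(losses.detach() / temperature, dim=0)
    meta_loss = (weights * losses).sum()

    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
```

With uniform weights this reduces to standard MAML; the weighting only changes how much each task's query loss drives the update of the shared initialization, which is the lever the paper's meta weight learning method optimizes.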