
IJCAI 2020 | Seven Must-Read Papers on Deep Reinforcement Learning

Author: AMiner科技

Date: 2020-09-28 18:04

Seven must-read IJCAI 2020 papers on Deep Reinforcement Learning

Introduction:
The International Joint Conference on Artificial Intelligence (IJCAI) is one of the premier academic conferences in the field of artificial intelligence. It was originally held in odd-numbered years and has been held annually since 2016. Due to the pandemic, IJCAI 2020 will take place January 5-10, 2021 in Yokohama, Japan.


According to the AMiner IJCAI 2020 word cloud, 小脉 found that representation learning, graph neural networks, deep reinforcement learning, and deep neural networks are among this year's hottest topics, drawing wide attention. Today, 小脉 shares seven must-read IJCAI 2020 papers on Deep Reinforcement Learning.


1. Title: Efficient Deep Reinforcement Learning via Adaptive Policy Transfer

Link: https://www.aminer.cn/pub/5ef96b048806af6ef2772111?conf=ijcai2020

Authors: Tianpei Yang、Jianye Hao、Zhaopeng Meng、Zongzhang Zhang、Yujing Hu、Yingfeng Chen、Changjie Fan、Weixun Wang、Wulong Liu、Zhaodong Wang、Jiajie Peng

Summary:
· The authors propose a Policy Transfer Framework (PTF) that efficiently selects the optimal source policy and exploits its useful information to facilitate learning on the target task.
· PTF efficiently avoids negative transfer by terminating the exploitation of the current source policy and adaptively selecting another one.
· PTF can be combined with existing DRL methods.
· Experimental results show that PTF efficiently accelerates the learning process of existing state-of-the-art DRL methods and outperforms previous policy-reuse approaches. (A minimal sketch of the adaptive selection idea follows below.)
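To give a flavor of the adaptive source-policy selection described above, here is a minimal, hypothetical Python sketch: it keeps a running return estimate for each source policy and switches to whichever source currently looks most useful, so a source causing negative transfer naturally falls out of favor. The class and its selection rule are illustrative stand-ins, not PTF's actual mechanism.

```python
import numpy as np

class AdaptivePolicySelector:
    """Illustrative stand-in for PTF-style adaptive source-policy selection."""

    def __init__(self, num_sources, window=50):
        # Recent episode returns observed while exploiting each source policy.
        self.returns = [[] for _ in range(num_sources)]
        self.window = window

    def select(self):
        # Try each unexplored source once before comparing averages.
        for i, r in enumerate(self.returns):
            if not r:
                return i
        # Otherwise exploit the source with the best recent average return.
        means = [np.mean(r[-self.window:]) for r in self.returns]
        return int(np.argmax(means))

    def update(self, source_id, episode_return):
        self.returns[source_id].append(episode_return)
```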

2. Title: KoGuN: Accelerating Deep Reinforcement Learning via Integrating Human Suboptimal Knowledge

Link: https://www.aminer.cn/pub/5e4d083f3a55ac8cfd770c23?conf=ijcai2020

Authors: Peng Zhang、Jianye Hao、Weixun Wang、Hongyao Tang、Yi Ma、Yihai Duan、Yan Zheng

Summary:
· The authors propose a novel policy network framework called KoGuN that leverages human knowledge to accelerate the learning process of RL agents.
· The authors first evaluate the algorithm on four tasks in Section 4.1: CartPole [Barto and Sutton, 1982], LunarLander and LunarLanderContinuous in OpenAI Gym [Brockman et al., 2016], and FlappyBird in PLE [Tasfi, 2016].
· The authors show the effectiveness and robustness of KoGuN in the sparse-reward setting in Section 4.2.
· For PPO without KoGuN, the authors use a neural network with two fully-connected hidden layers as the policy approximator.
· For KoGuN with a normal network (KoGuN-concat) as the refine module, the authors use a neural network with two fully-connected hidden layers for the refine module.
· For KoGuN with hypernetworks (KoGuN-hyper), the authors use hypernetworks to generate a refine module with one hidden layer.
· All hidden layers described above have 32 units. w1 is set to 0.7 at the beginning of training and decays to 0.1 by the end of the training phase. (A minimal sketch of the KoGuN-concat setup follows below.)
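To make the KoGuN-concat setup above concrete, here is a minimal, hypothetical PyTorch sketch under the stated dimensions (two fully-connected hidden layers of 32 units, and a trust weight w1 blending a fixed suboptimal knowledge controller with the learned refine module). The knowledge_controller is a stand-in for the paper's rule-based prior, and the blending form is an assumption, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class KoGuNConcatPolicy(nn.Module):
    """Sketch of a KoGuN-concat style policy; assumptions noted in comments."""

    def __init__(self, obs_dim, act_dim, knowledge_controller, hidden=32):
        super().__init__()
        # Fixed, suboptimal rule-based controller (stand-in for the
        # paper's human-knowledge controller).
        self.knowledge = knowledge_controller
        # Refine module: two fully-connected hidden layers of 32 units,
        # fed with the observation concatenated with the knowledge output.
        self.refine = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs, w1):
        # w1 decays from 0.7 to 0.1 over training (handled by the caller).
        prior = self.knowledge(obs)  # suboptimal action preferences
        refined = self.refine(torch.cat([obs, prior], dim=-1))
        return w1 * prior + (1.0 - w1) * refined  # assumed blending form
```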

3. Title: Generating Behavior-Diverse Game AIs with Evolutionary Multi-Objective Deep Reinforcement Learning

Link: https://www.aminer.cn/pub/5ef96b048806af6ef277219?conf=ijcai2020

Authors: Ruimin Shen、Yan Zheng、Jianye Hao、Zhaopeng Meng、Yingfeng Chen、Changjie Fan、Yang Liu

Summary:
· This paper proposes EMOGI, which aims to efficiently generate behavior-diverse game AIs by leveraging EA, PMOO, and DRL.
· Empirical results show the effectiveness of EMOGI in creating diverse and complex behaviors.
· For deploying AIs in commercial games, the robustness of the generated AIs is worth investigating as future work [Sun et al., 2020].

4. Title: Solving Hard AI Planning Instances Using Curriculum-Driven Deep Reinforcement Learning

Link: https://www.aminer.cn/pub/5eda19d991e01187f5d6db49?conf=ijcai2020

Authors: Dieqiao Feng、Carla P. Gomes、Bart Selman

Summary:
· The authors present a framework based on deep RL for solving hard combinatorial planning problems in the domain of Sokoban.
· The authors show the effectiveness of the learning-based planning strategy by solving hard Sokoban instances that are out of reach of previous search-based solution techniques, including methods specialized for Sokoban.
· Since Sokoban is one of the hardest challenge domains for current AI planners, this work shows the potential of curriculum-based deep RL for solving hard AI planning tasks. (A minimal sketch of a curriculum loop follows below.)
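As an illustration only, the following is a minimal, hypothetical sketch of a curriculum-driven training loop in the spirit described above: the agent trains on progressively harder instances and only advances once it is reliable at the current difficulty. The make_instance and train_episode helpers are placeholders, and the promotion threshold is an assumption, not the paper's procedure.

```python
def curriculum_train(agent, make_instance, train_episode,
                     max_difficulty=10, episodes_per_level=1000,
                     promote_at=0.9):
    """Train on progressively harder instances (hypothetical sketch).

    make_instance(d)           -> a planning instance of difficulty d
    train_episode(agent, inst) -> True if the agent solved inst
    """
    difficulty = 1
    while difficulty <= max_difficulty:
        solved = 0
        for _ in range(episodes_per_level):
            inst = make_instance(difficulty)
            solved += train_episode(agent, inst)
        # Promote to harder instances only once the agent is reliable here;
        # otherwise keep training at the current difficulty.
        if solved / episodes_per_level >= promote_at:
            difficulty += 1
    return agent
```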

5. Title: I4R: Promoting Deep Reinforcement Learning by the Indicator for Expressive Representations

Link: https://www.aminer.cn/pub/5ef96b048806af6ef2772128?conf=ijcai2020

Authors: Xufang Luo、Qi Meng、Di He、Wei Chen、Yunhong Wang

Summary:
· The authors mainly study the relationship between representations and the performance of DRL agents.
· The authors define the NSSV indicator, i.e., the smallest number of significant singular values, as a measurement of learned representations; they verify the positive correlation between NSSV and rewards, and further propose a novel method called I4R, which improves DRL algorithms by adding a corresponding regularization term to enhance NSSV.
· The authors present the proposed method I4R through exploratory experiments comprising three parts: observations, the proposed indicator NSSV, and the novel algorithm I4R. (A minimal sketch of computing an NSSV-style count follows below.)
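As a rough illustration, the sketch below computes an NSSV-style count on a batch of state representations: take the singular values of the representation matrix and count how many exceed a significance threshold. The relative threshold used here is an assumption for illustration; the paper defines its own significance criterion and regularization term.

```python
import numpy as np

def nssv(representations, rel_threshold=0.01):
    """Count significant singular values of a (batch, feature) matrix.

    A singular value is treated as "significant" if it exceeds
    rel_threshold times the largest singular value (assumed criterion).
    """
    s = np.linalg.svd(representations, compute_uv=False)
    return int(np.sum(s > rel_threshold * s.max()))

# Example: random features for a batch of 64 states, 32-dim each.
feats = np.random.randn(64, 32)
print(nssv(feats))
```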

6. Title: Rebalancing Expanding EV Sharing Systems with Deep Reinforcement Learning

Link: https://www.aminer.cn/pub/5ef96b048806af6ef2772092?conf=ijcai2020

Authors: Man Luo、Wenzhe Zhang、Tianyou Song、Kun Li、Hongming Zhu、Bowen Du、Hongkai Wen

Summary:
· The authors study incentive-based rebalancing for continuously expanding EV sharing systems.
· The authors design a simulator of EV sharing system operation, calibrated with a full year of real data from an actual EV sharing system.
· Extensive experiments show that the proposed approach significantly outperforms the baselines and the state-of-the-art in both satisfied-demand rate and net revenue, and is robust to different levels of system expansion dynamics.
· The authors show that the proposed approach performs consistently across different charging times and EV ranges.

7. Title: Independent Skill Transfer for Deep Reinforcement Learning

Link: https://www.aminer.cn/pub/5ef96b048806af6ef2772129?conf=ijcai2020

Authors: Qiangxing Tian、Guanchu Wang、Jinxin Liu、Donglin Wang、Yachen Kang

Summary:
· Deep reinforcement learning (DRL) has wide applications in various challenging fields, such as real-world visual navigation [Zhu et al., 2017], game playing [Silver et al., 2016], and robotic control [Schulman et al., 2015].
· In this work, the authors propose to learn independent skills for efficient skill transfer, where learned primitive skills with strong correlations are decomposed into independent skills.
· Taking the eigenvalues in Figure 1 as an example: for the case of 6 primitive skills, |Z| = 3 is reasonable, since more than 98% of the variation in primitive actions can be represented by three independent components.
· Effective observation collection and independent skills guarantee the success of low-dimensional skill transfer. (A minimal sketch of the decomposition idea follows below.)
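To illustrate the decomposition idea (not the authors' exact algorithm), the sketch below applies PCA to actions collected from correlated primitive skills and picks the smallest number of components explaining 98% of the variance, mirroring the |Z| = 3 example above. The data here is synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for actions produced by 6 correlated primitive skills:
# 10000 samples of 6-dim actions that actually live on a 3-dim subspace.
latent = np.random.randn(10000, 3)
mixing = np.random.randn(3, 6)
actions = latent @ mixing + 0.05 * np.random.randn(10000, 6)

# Choose the smallest number of independent components that explains 98%
# of the variance (mirrors choosing |Z| = 3 for 6 primitive skills).
pca = PCA().fit(actions)
cum = np.cumsum(pca.explained_variance_ratio_)
z_dim = int(np.searchsorted(cum, 0.98) + 1)
print("independent skill dimension |Z| =", z_dim)  # typically prints 3
```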

For more IJCAI 2020 papers, visit: https://www.aminer.cn/conf/ijcai2020/papers
Add "小脉" on WeChat and send the message "IJCAI 2020" to join the IJCAI 2020 discussion group and talk with IJCAI 2020 paper authors face to face!
