Towards Decentralized Social Reinforcement Learning via Ego-Network Extrapolation

AAMAS (2021)

Abstract
In this work, we consider the problem of multi-agent reinforcement learning in directed social networks with a large number of agents. Network dependencies among user activities affect the reward for individual actions and need to be incorporated into policy learning; however, directed interactions mean that the network is only partially observable to each user. When policies are estimated locally, this insufficient state information makes it challenging for users to learn network dependencies effectively. To address this, we use parameter sharing and ego-network extrapolation in a decentralized policy learning and execution framework. This contrasts with previous work on social RL, which assumes a centralized controller that captures inter-agent dependencies for joint policy learning. We evaluate our proposed approach on Twitter datasets and show that our decentralized learning approach achieves performance nearly equivalent to that of the centralized learning approach and superior to other baselines.
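The partial observability described above can be illustrated with a minimal sketch: in a directed follower graph, each agent's local state is limited to its ego-network (itself plus its out-neighbors), so no single agent sees the full network. The function name, graph representation, and toy data below are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of directed-graph partial observability:
# each agent observes only its ego-network, i.e. itself plus the
# accounts it follows (its out-neighbors). All names here are
# hypothetical; the paper's implementation may differ.

def ego_network(adj, agent):
    """Return the set of nodes visible to `agent`: the agent itself
    and its out-neighbors in the directed adjacency list `adj`."""
    return {agent} | set(adj.get(agent, ()))

# Toy directed follower graph: an edge u -> v means u follows v.
adj = {
    "a": ["b", "c"],
    "b": ["a"],
    "c": [],
}

print(sorted(ego_network(adj, "a")))  # "a" observes itself, "b", and "c"
print(sorted(ego_network(adj, "c")))  # "c" follows no one, so it sees only itself
```

Note that observability is asymmetric: "a" sees "c", but "c" does not see "a". This asymmetry is what forces each agent's local state estimate to extrapolate beyond its ego-network.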