Multi-agent Online Learning with Asynchronous Feedback Loss

Neural Information Processing Systems (2018)

Abstract
We consider a game-theoretic multi-agent learning problem in which feedback information can be lost and rewards are given by a broad class of games known as variationally stable games. We propose a simple variant of the online gradient descent algorithm, called reweighted online gradient descent (ROGD), and show that in variationally stable games, if each agent adopts ROGD learning dynamics, then almost sure convergence to the set of Nash equilibria is guaranteed, even when the feedback loss is asynchronous and arbitrarily correlated among agents. We then extend the framework to handle unknown feedback loss probabilities by using an estimator, constructed from past data, in their place. Finally, we further extend the framework to accommodate both asynchronous loss and stochastic rewards, and establish that multi-agent ROGD learning still converges to the set of Nash equilibria in these settings. Together, these results make meaningful progress on the broad open problem of convergence of no-regret algorithms to Nash equilibria in general continuous games and contribute to the wider landscape of multi-agent online learning under imperfect information.
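
Below is a minimal sketch of what a single ROGD update might look like, assuming feedback loss is Bernoulli with a known arrival probability and that "reweighted" means inverse-probability weighting of the received gradient so the update is unbiased in expectation; the function name, the projection step, and the parameter p_arrive are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def rogd_step(x, grad, received, p_arrive, eta, radius=1.0):
    """One reweighted online gradient descent step for a single agent.

    Assumptions (illustrative, not from the paper's text):
      - feedback is lost independently, arriving with probability p_arrive,
      - "reweighted" means scaling the received gradient by 1 / p_arrive,
        making the update unbiased relative to full-feedback OGD,
      - actions live in a Euclidean ball of the given radius.
    """
    if received:
        # Inverse-probability reweighting of the observed gradient.
        x = x - eta * grad / p_arrive
    # If feedback was lost this round, the action stays unchanged.

    # Project back onto the feasible action set (here: a Euclidean ball).
    norm = np.linalg.norm(x)
    if norm > radius:
        x = x * (radius / norm)
    return x


# Hypothetical usage with a toy quadratic loss and a decaying step size.
rng = np.random.default_rng(0)
x, p_arrive, eta = np.zeros(2), 0.7, 0.1
for t in range(1000):
    received = rng.random() < p_arrive   # Bernoulli feedback loss
    grad = 2.0 * x - 1.0                 # gradient of a toy quadratic loss
    x = rogd_step(x, grad, received, p_arrive, eta / np.sqrt(t + 1))
```

When the arrival probability is unknown, the abstract indicates it is replaced by an estimator built from past data; in a sketch like this, that would amount to passing the empirical arrival frequency observed so far in place of p_arrive.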