Multiagent Learning: From Fundamentals to Foundation Models

AAMAS '23: Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems (2023)

Abstract
Research in multiagent learning has come a long way over the past few decades, from learning in abstract normal-form games such as Rock-Paper-Scissors, to learning in complex worlds such as Humanoid Soccer, Capture the Flag, Gran Turismo racing, and recently board games such as Diplomacy and Stratego. In this talk I will take you on a journey that starts in the mid-1990s and sheds light on algorithmic progress over the years in multiagent learning systems, uncovering game-theoretic fundamentals for reinforcement learning, adaptability, and decision-making. There have been two major research eras in the field thus far: the pre-deep multiagent learning and deep multiagent learning periods. I believe we are now on the verge of a third period, multiagent learning with foundation models. We will connect old and new ideas from the first two periods and lay out interesting challenges ahead of us for the coming era. Specifically, we consider the ways in which the cornerstone ideas of the first two periods may inform the development of generally capable multiagent foundation models in the future.