Social Learning and the Innkeeper's Challenge

Proceedings of the 2019 ACM Conference on Economics and Computation (2019)

Abstract
Technological evolution, so central to humanity's progress in recent decades, is the process of constantly introducing new technologies to replace old ones. A new technology is not necessarily a better one, and so should not always be embraced. How can society learn which novelties are actual improvements over the existing technology? Whereas the quality of the status-quo technology is well known, the new one is a pig in a poke. With sufficiently many individuals willing to explore the new technology, society can learn whether it is indeed an improvement. However, self-motivated agents often refuse to explore, in particular if they have observed predecessors who were disappointed by the new technology. Inspired by the classical multi-armed bandit model, we study a setting where agents arrive sequentially and must pull one of two arms in order to receive a reward: a risky arm (representing the new technology) and a safe arm (representing the existing one). A central planner must induce sufficiently many agents to experiment with the risky arm. The central planner observes the actions and rewards of all agents, while the agents themselves have only partial observation. For the setting where each agent observes his predecessor, we provide the central planner with a recommendation algorithm that is (almost) incentive compatible and facilitates social learning.
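The exploration problem the abstract describes can be pictured with a minimal simulation. The sketch below is not the paper's (almost) incentive-compatible recommendation algorithm; it is a naive baseline planner that forces the first few agents to try the risky arm and thereafter recommends whichever arm looks empirically better. All names and parameter values (mu_safe, p_risky, n_explore) are illustrative assumptions.

```python
import random

def simulate(n_agents=1000, mu_safe=0.5, p_risky=0.6, n_explore=50, seed=0):
    """Toy two-armed bandit with sequential agents and a naive planner.

    The safe arm pays a known deterministic reward mu_safe; the risky arm
    is Bernoulli with unknown success probability p_risky. The planner
    forces the first n_explore agents to pull the risky arm, then
    recommends the arm with the higher empirical mean. Returns the
    average reward per agent. (Illustrative only, not the paper's
    incentive-compatible mechanism.)
    """
    rng = random.Random(seed)
    pulls, wins = 0, 0.0   # risky-arm statistics observed by the planner
    total = 0.0
    for t in range(n_agents):
        if t < n_explore:
            arm = "risky"                  # forced exploration phase
        else:
            est = wins / pulls             # empirical mean of risky arm
            arm = "risky" if est > mu_safe else "safe"
        if arm == "risky":
            r = 1.0 if rng.random() < p_risky else 0.0
            pulls += 1
            wins += r
        else:
            r = mu_safe                    # safe arm: known reward
        total += r
    return total / n_agents

avg = simulate()
```

Because agents here blindly follow the planner, there is no incentive problem; the paper's contribution is precisely that its recommendations remain (almost) incentive compatible even when agents observe a predecessor and may prefer the safe arm.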
Keywords
recommendation systems, social learning