Crowd prediction systems: Markets, polls, and elite forecasters

Proceedings of the 23rd ACM Conference on Economics and Computation (2024)

Abstract
What systems should we use to elicit and aggregate judgmental forecasts? Who should be asked to make such forecasts? We address these questions by assessing two widely used crowd prediction systems: prediction markets and prediction polls. Our main test compares a prediction market against team-based prediction polls, using data from a large, multi-year forecasting competition. Each of these two systems uses inputs from either a large, sub-elite or a small, elite crowd. We find that small, elite crowds outperform larger ones, whereas the two systems are statistically tied. In addition to this main research question, we examine two complementary questions. First, we compare two market structures—continuous double auction (CDA) markets and logarithmic market scoring rule (LMSR) markets—and find that the LMSR market produces more accurate forecasts than the CDA market, especially on low-activity questions. Second, given the importance of elite forecasters, we compare the talent-spotting properties of the two systems and find that markets and polls are equally effective at identifying elite forecasters. Overall, the performance benefits of “superforecasting” hold across systems. Managers should move towards identifying and deploying small, select crowds to maximize forecasting performance.
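The abstract contrasts continuous double auction (CDA) markets with logarithmic market scoring rule (LMSR) markets. As background, the standard LMSR mechanism (not code from the paper; the liquidity parameter `b` and function names below are illustrative) prices outcomes via the cost function C(q) = b·ln(Σᵢ exp(qᵢ/b)), and a trade costs the change in C. A minimal sketch:

```python
import math

def lmsr_cost(quantities, b=100.0):
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b)).

    `quantities` holds outstanding shares per outcome; `b` is the
    market maker's liquidity parameter (illustrative default).
    """
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_price(quantities, i, b=100.0):
    """Instantaneous price of outcome i: exp(q_i/b) / sum_j exp(q_j/b)."""
    denom = sum(math.exp(q / b) for q in quantities)
    return math.exp(quantities[i] / b) / denom

def lmsr_trade_cost(quantities, i, delta, b=100.0):
    """Cost of buying `delta` shares of outcome i: C(q') - C(q)."""
    new_q = list(quantities)
    new_q[i] += delta
    return lmsr_cost(new_q, b) - lmsr_cost(quantities, b)
```

Unlike a CDA, which needs matched counterparties and can stall on thinly traded questions, the LMSR market maker always quotes a price, which is consistent with the paper's finding that LMSR does comparatively better on low-activity questions.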
Keywords
Forecasting, Judgment, Crowdsourcing, Aggregation, Markets