Intrinsically motivated reinforcement learning: A promising framework for procedural content generation.

IEEE Conference on Computational Intelligence and Games (2016)

Cited by 6
Abstract
So far, Evolutionary Algorithms (EA) have been the dominant paradigm for Procedural Content Generation (PCG). While we believe the field has achieved remarkable success, we claim that there is still wide room for improvement. The field of machine learning offers an abundance of methods that promise solutions to aspects of PCG that remain under-researched. In this paper, we advocate the use of intrinsically motivated reinforcement learning for content generation: a class of methods that strives for knowledge for its own sake rather than as a step towards finding a solution. We argue that this approach promises solutions to some of the well-known problems in PCG: (1) searching for novelty and diversity can be easily incorporated as an intrinsic reward, (2) models of player experience can be improved while adapted content is generated, simultaneously, by combining extrinsic and intrinsic rewards, and (3) mixed-initiative design tools can incorporate more knowledge about the designer and her preferences and ultimately provide better assistance. We demonstrate our arguments and discuss the challenges that face the proposed approach.
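
To make points (1) and (2) concrete, below is a minimal illustrative sketch, not the paper's implementation, of how an intrinsic novelty bonus can be mixed with an extrinsic quality reward in a tabular Q-learning content generator. All names here (ACTIONS, extrinsic_reward, the BETA mixing weight, the toy generator) are hypothetical assumptions chosen for illustration.

import random
from collections import defaultdict

# Hypothetical content-editing actions for a toy level generator.
ACTIONS = ["add_room", "add_corridor", "add_enemy", "stop"]
ALPHA, GAMMA, EPSILON, BETA = 0.1, 0.9, 0.1, 0.5  # BETA weights the intrinsic term

Q = defaultdict(float)            # Q[(state, action)] value table
visit_counts = defaultdict(int)   # how often each content state has been seen

def intrinsic_reward(state):
    """Count-based novelty bonus: rarely visited content is rewarded more."""
    visit_counts[state] += 1
    return 1.0 / (visit_counts[state] ** 0.5)

def extrinsic_reward(state):
    """Stand-in for a designer/player-experience score of the content."""
    return 1.0 if state.count("add_room") >= 2 else 0.0

def step(state, action):
    """Toy generator: content is the tuple of edit actions taken so far."""
    return state + (action,)

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(500):
    state = ()
    for _ in range(6):  # bounded generation episode
        action = choose_action(state)
        next_state = step(state, action)
        # Points (1)+(2): the total reward mixes extrinsic quality with an
        # intrinsic novelty term, so the generator keeps seeking new content.
        r = extrinsic_reward(next_state) + BETA * intrinsic_reward(next_state)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
        state = next_state
        if action == "stop":
            break

In this sketch the count-based bonus decays as a state is revisited, so exploration pressure shifts toward unseen content; swapping extrinsic_reward for a learned player-experience model would correspond to combining the two reward signals as the abstract suggests.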
Keywords
intrinsically motivated reinforcement learning, evolutionary algorithms, EA, procedural content generation, PCG, machine learning, intrinsic rewards, mixed-initiative design tools, computer game domain