Adaptive Auto-Cinematography in Open Worlds

2023 IEEE 6th International Conference on Multimedia Information Processing and Retrieval (MIPR)

Abstract
The increasing demand among players for greater freedom within virtual environments has created a need for automated cinematography in open-world video games. Contemporary wearable devices help address these requirements by interpreting players' movements and translating them into in-game interactive actions. This increased freedom introduces significant complexity into automatic cinematography computation. In this paper, we introduce a novel Generative Adversarial Network (GAN) based model, AACOGAN, to tackle this challenge effectively. The AACOGAN model establishes a relationship between player interactions, object locations, and camera movements, and subsequently generates camera shots that augment player immersion. Experimental results demonstrate that AACOGAN can enhance the correlation between player interactions and camera trajectories by an average of 73% and improve multi-focus scene quality by up to 32.9%. Consequently, AACOGAN is established as an efficient and economical solution for generating camera shots appropriate for a wide range of interactive motions in open-world settings. Example video footage can be found at https://youtu.be/Syrwbnpzgx8.
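To illustrate the kind of formulation the abstract describes, the sketch below shows a minimal conditional GAN that maps player interactions and focus-object locations to a camera trajectory. The paper does not disclose AACOGAN's actual architecture here, so every module, dimension, and the loss setup are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

# Minimal conditional-GAN sketch for interaction-aware camera shot generation.
# All dimensions and module names are assumptions; they are NOT taken from the paper.
INTERACTION_DIM = 16   # encoded player interaction (assumed)
OBJECT_DIM = 9         # focus-object locations, e.g. 3 objects x (x, y, z) (assumed)
NOISE_DIM = 32         # latent noise for shot diversity
TRAJ_LEN = 60          # camera poses per generated shot (assumed)
POSE_DIM = 6           # camera position (x, y, z) + orientation (yaw, pitch, roll)

class Generator(nn.Module):
    """Maps (noise, interaction, object locations) to a camera trajectory."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + INTERACTION_DIM + OBJECT_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, TRAJ_LEN * POSE_DIM),
        )

    def forward(self, z, interaction, objects):
        x = torch.cat([z, interaction, objects], dim=-1)
        return self.net(x).view(-1, TRAJ_LEN, POSE_DIM)

class Discriminator(nn.Module):
    """Scores whether a camera trajectory is plausible for the given context."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(TRAJ_LEN * POSE_DIM + INTERACTION_DIM + OBJECT_DIM, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )

    def forward(self, traj, interaction, objects):
        x = torch.cat([traj.flatten(1), interaction, objects], dim=-1)
        return self.net(x)

def train_step(G, D, opt_g, opt_d, real_traj, interaction, objects):
    """One adversarial step with standard BCE GAN losses (illustrative only)."""
    bce = nn.BCEWithLogitsLoss()
    batch = real_traj.size(0)
    z = torch.randn(batch, NOISE_DIM)

    # Discriminator: separate real trajectories from generated ones.
    fake_traj = G(z, interaction, objects).detach()
    d_loss = (bce(D(real_traj, interaction, objects), torch.ones(batch, 1)) +
              bce(D(fake_traj, interaction, objects), torch.zeros(batch, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: produce trajectories the discriminator accepts as real.
    g_loss = bce(D(G(z, interaction, objects), interaction, objects),
                 torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

In this hypothetical setup, conditioning both networks on the interaction encoding and object locations is what ties the generated shots to player actions and multi-focus scenes; the published model may differ substantially.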
Keywords
automatic cinematography, GAN, deep-learning, efficient