Camera Motion Generation Method Based on Performer's Position for Performance Filming

2023 IEEE 12th Global Conference on Consumer Electronics (GCCE)

Abstract
Camera technique is a crucial component of visual expression, and its role in video quality cannot be overstated. Unlike film or drama productions, live performances offer few opportunities for reshoots, so camera movements must be planned carefully in advance. The proposed method uses deep neural networks to learn and reproduce the camera's positions and postures in response to the performers' positions and orientations on stage, thereby capturing the tacit knowledge of professional camerawork. The method has two phases. First, a network determines camera placements and postures from the performers' positions and orientations as given in the stage script. Second, another network generates the camera movements during the live performance, taking as input both the performers' positions and orientations and the preliminary camera placements and postures produced by the first network. The network architecture is a Transformer augmented with relative position representations in its input, which learns camera motion features more accurately than the standard Transformer.
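The "relative position representation" the abstract refers to can be illustrated with a minimal sketch of single-head self-attention whose logits include learned relative-offset key embeddings (in the style of Shaw et al., 2018). This is not the authors' code; all names, shapes, and the NumPy formulation are illustrative assumptions.

```python
import numpy as np

def rel_attention(x, Wq, Wk, Wv, rel_k, max_dist):
    """Self-attention with relative position representations (illustrative).

    x        : (T, d) sequence of per-frame features (e.g. performer poses)
    Wq/Wk/Wv : (d, d) projection matrices
    rel_k    : (2*max_dist + 1, d) learned relative-offset key embeddings
    """
    T, d = x.shape
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    # Pairwise offsets i - j, clipped to [-max_dist, max_dist], then
    # shifted to index into the relative-embedding table.
    idx = np.clip(np.arange(T)[:, None] - np.arange(T)[None, :],
                  -max_dist, max_dist) + max_dist
    a_k = rel_k[idx]                                   # (T, T, d)
    # Standard content term plus relative-position term in the logits.
    logits = (q @ k.T + np.einsum('id,ijd->ij', q, a_k)) / np.sqrt(d)
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                 # softmax over keys
    return w @ v                                       # (T, d)
```

In this formulation the attention score between frames depends on their clipped temporal offset as well as their content, which is one way a Transformer can be made more sensitive to motion structure than with absolute position encodings alone.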
Keywords
Automatic camera motion generation, live stage performance, deep learning, Transformer