Multi-view Gait Video Synthesis

Proceedings of the 30th ACM International Conference on Multimedia (2022)

Abstract
This paper investigates a new fine-grained video generation task, namely Multi-view Gait Video Synthesis, in which a generation model takes a video of a walking human captured from an arbitrary viewpoint and creates multi-view renderings of the subject. This task is particularly challenging, as it requires synthesizing visually plausible results while simultaneously preserving the discriminative gait cues used for identification. To tackle the challenge posed by the entanglement of viewpoint, texture, and body structure, we present a network with two collaborative branches that decouples the novel-view rendering process into two streams, for human appearance (texture) and silhouettes (structure), respectively. Additionally, prior knowledge from person re-identification and gait recognition is incorporated into the training loss to yield more adequate and accurate dynamic details. Experimental results show that the presented method achieves promising success rates when attacking state-of-the-art gait recognition models. Furthermore, the method can improve gait recognition systems through effective data augmentation. To the best of our knowledge, this is the first work to manipulate viewpoints in human videos under person-specific behavioral constraints.
Keywords
video, synthesis, multi-view