Discovering the Physical Parts of an Articulated Object Class from Multiple Videos

2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016

Cited 11 | Viewed 77
Abstract
We propose a motion-based method to discover the physical parts of an articulated object class (e.g. head/torso/leg of a horse) from multiple videos. The key is to find object regions that exhibit consistent motion relative to the rest of the object, across multiple videos. We can then learn a location model for the parts and segment them accurately in the individual videos using an energy function that also enforces temporal and spatial consistency in part motion. Unlike our approach, traditional methods for motion segmentation or non-rigid structure from motion operate on one video at a time. Hence they cannot discover a part unless it displays independent motion in that particular video. We evaluate our method on a new dataset of 32 videos of tigers and horses, where we significantly outperform a recent motion segmentation method on the task of part discovery (obtaining roughly twice the accuracy).
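The abstract describes segmenting parts via an energy function that combines a learned part-location model with temporal and spatial consistency of part labels. As a rough illustration of that idea (not the paper's actual formulation), the sketch below scores a per-pixel part labelling with a unary data cost plus Potts-style spatial and temporal smoothness penalties; all function names and weights are illustrative assumptions.

```python
import numpy as np

def segmentation_energy(labels, unary, lam_s=1.0, lam_t=1.0, prev_labels=None):
    """Toy energy for a per-pixel part labelling of one frame.

    labels:      (H, W) int array, part label per pixel
    unary:       (H, W, K) cost of assigning each of K parts per pixel
                 (e.g. from a learned part-location model)
    prev_labels: optional (H, W) labelling of the previous frame
    Terms and weights are illustrative, not the paper's actual energy.
    """
    H, W = labels.shape
    # Unary term: data cost of the chosen label at each pixel.
    e = unary[np.arange(H)[:, None], np.arange(W)[None, :], labels].sum()
    # Spatial consistency: penalise label changes between 4-neighbours.
    e += lam_s * (labels[1:, :] != labels[:-1, :]).sum()
    e += lam_s * (labels[:, 1:] != labels[:, :-1]).sum()
    # Temporal consistency: penalise disagreement with the previous frame.
    if prev_labels is not None:
        e += lam_t * (labels != prev_labels).sum()
    return float(e)
```

In the paper this kind of objective would be minimised over labellings of each video; the sketch only evaluates a given labelling, which is enough to see how the spatial and temporal terms trade off against the location model.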
Keywords
articulated object class physical parts, energy function, spatial consistency, temporal consistency, non-rigid structure from motion, motion segmentation method