Learning Motion from Temporal Coincidences.

Lecture Notes in Computer Science (2017)

Abstract
In this work we study the unsupervised learning of correspondence relations over extended image sequences. We are specifically interested in learning these correspondence relations 'from scratch', considering only the temporal signals of single pixels. We build on the Temporal Coincidence Analysis (TCA) approach, which we apply to motion estimation. Experimental results showcase the approach for learning average motion maps and for estimating yaw rates in a visual odometry setting. Our approach is not meant as a direct competitor to state-of-the-art dense motion algorithms; rather, it shows that valuable information for various vision tasks can be learnt by a simple statistical analysis at the pixel level. Primarily, the approach unveils principles on which biological or 'deep' learning techniques may build architectures for motion perception; TCA thus formulates a hypothesis for a fundamental perception mechanism. Motion or correspondence distributions as determined here may equip conventional methods with a confidence measure, which allows the detection of implausible, and thus probably incorrect, correspondences. The approach does not need any kind of ground-truth information but instead learns over long image sequences, and may thus be seen as a continuous learning method. The method is not restricted to a specific camera model and works even under strong geometric distortions. Results are presented for standard as well as fisheye cameras.
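To make the core idea concrete: temporal coincidence analysis scores candidate pixel correspondences by how often their temporal signals change together. The sketch below is a minimal illustration of that principle, not the paper's actual algorithm; the event threshold, the normalization, and all function names are assumptions introduced here for illustration.

```python
import numpy as np

def temporal_events(signal, thresh=0.5):
    """Mark frames where a pixel's intensity changes sharply (an 'event').
    The simple frame-difference threshold is an illustrative choice."""
    return np.abs(np.diff(signal)) > thresh

def coincidence_score(sig_a, sig_b, thresh=0.5):
    """Fraction of event frames in which both pixels fire simultaneously
    (a Jaccard-style normalization, chosen here for simplicity)."""
    ev_a = temporal_events(sig_a, thresh)
    ev_b = temporal_events(sig_b, thresh)
    both = np.logical_and(ev_a, ev_b).sum()
    either = np.logical_or(ev_a, ev_b).sum()
    return both / either if either else 0.0

# Toy data: pixel b carries the same signal as pixel a (a true
# correspondence), while pixel c is statistically unrelated.
rng = np.random.default_rng(0)
a = rng.random(1000)
b = np.copy(a)        # corresponding pixel: events always coincide
c = rng.random(1000)  # unrelated pixel: events coincide only by chance

print(coincidence_score(a, b))  # 1.0 for identical signals
print(coincidence_score(a, c))  # much lower for unrelated signals
```

Accumulating such scores over long sequences, without any ground truth, is what lets a correspondence (and hence motion) distribution emerge per pixel, in the spirit of the continuous learning described above.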
Keywords
temporal coincidences, motion, learning