Joint Hand Motion and Interaction Hotspots Prediction from Egocentric Videos

IEEE Conference on Computer Vision and Pattern Recognition (2022)

Abstract
We propose to forecast future hand-object interactions given an egocentric video. Instead of predicting action labels or pixels, we directly predict the hand motion trajectory and the future contact points on the next active object (i.e., interaction hotspots). This relatively low-dimensional representation provides a concrete description of future interactions. To tackle this task, we first provide an automatic way to collect trajectory and hotspot labels on large-scale data. We then use this data to train an Object-Centric Transformer (OCT) model for prediction. Our model performs hand and object interaction reasoning via the self-attention mechanism in Transformers. OCT also provides a probabilistic framework to sample future trajectories and hotspots to handle the uncertainty in prediction. We perform experiments on the Epic-Kitchens-55, Epic-Kitchens-100, and EGTEA Gaze+ datasets, and show that OCT outperforms state-of-the-art approaches by a large margin. The project page is available at https://stevenlsw.github.io/hoi-forecast.
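The abstract describes a two-part pattern: self-attention that jointly reasons over hand and object features, followed by a stochastic head from which multiple plausible futures can be sampled. The sketch below illustrates only that pattern; the module name, feature dimensions, token layout, and the Gaussian reparameterization head are assumptions made for illustration, not the authors' actual OCT implementation.

```python
# Minimal sketch of an object-centric Transformer for joint trajectory/
# hotspot prediction. All names and dimensions are hypothetical.
import torch
import torch.nn as nn

class ObjectCentricTransformerSketch(nn.Module):
    def __init__(self, feat_dim=256, n_heads=8, n_layers=4,
                 horizon=4, n_hotspots=1):
        super().__init__()
        self.horizon = horizon
        # Shared encoder: self-attention jointly reasons over hand and
        # object tokens extracted from the observed egocentric frames.
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Stochastic heads: predict a Gaussian over future 2D hand waypoints
        # and over contact points (hotspots) on the next active object, so
        # that multiple futures can be sampled.
        self.traj_mu = nn.Linear(feat_dim, horizon * 2)
        self.traj_logvar = nn.Linear(feat_dim, horizon * 2)
        self.hotspot_mu = nn.Linear(feat_dim, n_hotspots * 2)
        self.hotspot_logvar = nn.Linear(feat_dim, n_hotspots * 2)

    def forward(self, hand_tokens, obj_tokens):
        # hand_tokens: (B, T_h, D); obj_tokens: (B, T_o, D)
        tokens = torch.cat([hand_tokens, obj_tokens], dim=1)
        ctx = self.encoder(tokens)
        hand_ctx = ctx[:, 0]                    # first hand token as summary
        obj_ctx = ctx[:, hand_tokens.size(1)]   # first object token as summary

        def sample(mu, logvar):
            # Reparameterization trick: mu + sigma * eps
            return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

        traj = sample(self.traj_mu(hand_ctx), self.traj_logvar(hand_ctx))
        hot = sample(self.hotspot_mu(obj_ctx), self.hotspot_logvar(obj_ctx))
        return traj.view(-1, self.horizon, 2), hot.view(-1, 1, 2)

# Usage: draw several plausible futures for one observed clip.
model = ObjectCentricTransformerSketch()
hand = torch.randn(1, 8, 256)   # 8 hand tokens from past frames
objs = torch.randn(1, 5, 256)   # 5 candidate-object tokens
futures = [model(hand, objs) for _ in range(3)]  # 3 sampled futures
```

Sampling several futures from predicted Gaussians is just one way to realize the probabilistic framing mentioned in the abstract; the paper's actual stochastic mechanism may differ.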
Keywords
Video analysis and understanding, Behavior analysis