AirObject: A Temporally Evolving Graph Embedding for Object Identification

IEEE Conference on Computer Vision and Pattern Recognition (2022)

Abstract
Object encoding and identification are vital for robotic tasks such as autonomous exploration, semantic scene understanding, and relocalization. Previous approaches have attempted to either track objects or generate descriptors for object identification. However, such systems are limited to a “fixed” partial object representation from a single viewpoint. In a robot exploration setup, there is a requirement for a temporally “evolving” global object representation built as the robot observes the object from multiple viewpoints. Furthermore, given the vast distribution of unknown novel objects in the real world, the object identification process must be class-agnostic. In this context, we propose a novel temporal 3D object encoding approach, dubbed AirObject, to obtain global keypoint graph-based embeddings of objects. Specifically, the global 3D object embeddings are generated using a temporal convolutional network across structural information of multiple frames obtained from a graph attention-based encoding method. We demonstrate that AirObject achieves state-of-the-art performance for video object identification and is robust to severe occlusion, perceptual aliasing, viewpoint shift, deformation, and scale transformations, outperforming the state-of-the-art single-frame and sequential descriptors. To the best of our knowledge, AirObject is one of the first temporal object encoding methods. Source code is available at https://github.com/Nik-v9/AirObject.
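To make the described pipeline concrete, the sketch below is a minimal PyTorch-style illustration, not the authors' implementation: each frame's keypoint graph is encoded with a graph attention layer into a frame descriptor, and a temporal convolutional network fuses the sequence of frame descriptors into one global object embedding. The class names, layer sizes, self-loop adjacency handling, and mean-pooling steps are illustrative assumptions; see the linked repository for the actual AirObject code.

```python
# Minimal sketch (assumed structure, not the authors' code) of the pipeline in the
# abstract: graph attention over per-frame keypoint graphs, then a temporal
# convolution that aggregates frame descriptors into a global object embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphAttentionEncoder(nn.Module):
    """Encodes one frame's keypoint graph into a fixed-size frame descriptor."""

    def __init__(self, feat_dim=256, hidden_dim=256):
        super().__init__()
        self.query = nn.Linear(feat_dim, hidden_dim)
        self.key = nn.Linear(feat_dim, hidden_dim)
        self.value = nn.Linear(feat_dim, hidden_dim)

    def forward(self, node_feats, adj):
        # node_feats: (N, feat_dim) keypoint descriptors of one object in one frame
        # adj: (N, N) binary adjacency of the keypoint graph
        adj = adj + torch.eye(adj.shape[0], device=adj.device)  # self-loops so every row attends somewhere
        q, k, v = self.query(node_feats), self.key(node_feats), self.value(node_feats)
        scores = q @ k.t() / k.shape[-1] ** 0.5        # (N, N) attention logits
        scores = scores.masked_fill(adj == 0, float("-inf"))
        attn = torch.softmax(scores, dim=-1)
        node_out = attn @ v                            # (N, hidden_dim) attended node features
        return node_out.mean(dim=0)                    # frame descriptor (hidden_dim,)


class TemporalObjectEncoder(nn.Module):
    """Fuses a sequence of frame descriptors into one global object embedding."""

    def __init__(self, hidden_dim=256, out_dim=512):
        super().__init__()
        self.tcn = nn.Sequential(
            nn.Conv1d(hidden_dim, hidden_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden_dim, out_dim, kernel_size=3, padding=1),
        )

    def forward(self, frame_descs):
        # frame_descs: (T, hidden_dim), one descriptor per observed frame
        x = frame_descs.t().unsqueeze(0)               # (1, hidden_dim, T)
        x = self.tcn(x).mean(dim=-1).squeeze(0)        # temporal pooling -> (out_dim,)
        return F.normalize(x, dim=-1)                  # unit-norm global object embedding
```

In this sketch, two object embeddings would be compared by cosine similarity for identification, which is why the output is L2-normalized.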
Keywords
3D from multi-view and sensors, Deep learning architectures and techniques, Machine learning, Recognition: detection, categorization, retrieval, Representation learning, Robot vision, Video analysis and understanding, Vision applications and systems