Zero123-6D: Zero-shot Novel View Synthesis for RGB Category-level 6D Pose Estimation
arXiv (2024)
Abstract
Estimating the pose of objects through vision is essential for robotic
platforms to interact with the environment. Yet, it presents many challenges,
often related to the lack of flexibility and generalizability of
state-of-the-art solutions. Diffusion models are a cutting-edge neural
architecture transforming 2D and 3D computer vision, showing remarkable
performance in zero-shot novel-view synthesis. Such a use case is particularly
intriguing for reconstructing 3D objects; however, its application to
localizing objects in unstructured environments remains largely unexplored. To
this end, this work presents Zero123-6D to demonstrate the utility of
diffusion-model-based novel-view synthesizers in enhancing RGB category-level
6D pose estimation by integrating them with feature-extraction techniques. The
outlined method exploits such a novel-view synthesizer to expand a sparse set
of RGB-only reference views for the zero-shot 6D pose estimation task.
Experiments are quantitatively analyzed on the CO3D dataset, showcasing
increased performance over baselines, a substantial reduction in data
requirements, and the removal of the need for depth information.
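The core idea — densify a sparse reference set with synthesized novel views, then match the query against them in feature space to read off a pose — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `synthesize_view` stands in for the diffusion-based novel-view synthesizer and `extract_features` for a learned feature extractor (e.g. ViT-style descriptors); both are hypothetical placeholders here.

```python
import numpy as np

def sample_viewpoints(n):
    # Evenly sample azimuth/elevation pairs on a viewing sphere,
    # a stand-in for the densified set of reference poses.
    az = np.linspace(0, 2 * np.pi, n, endpoint=False)
    el = np.full(n, np.pi / 6)
    return np.stack([az, el], axis=1)

def synthesize_view(ref_image, pose):
    # Placeholder for the diffusion-based novel-view synthesizer;
    # here we merely roll pixels proportionally to the azimuth.
    shift = int(pose[0] / (2 * np.pi) * ref_image.shape[1])
    return np.roll(ref_image, shift, axis=1)

def extract_features(image):
    # Placeholder for a semantic feature extractor; a flattened,
    # L2-normalized image serves as the descriptor.
    v = image.astype(np.float64).ravel()
    return v / (np.linalg.norm(v) + 1e-8)

def estimate_pose(query, ref_image, n_views=16):
    # Expand one reference image into many synthetic views, then
    # return the pose of the view most similar to the query.
    poses = sample_viewpoints(n_views)
    refs = [extract_features(synthesize_view(ref_image, p)) for p in poses]
    q = extract_features(query)
    sims = np.array([r @ q for r in refs])
    return poses[np.argmax(sims)]
```

In the toy setting, a query that is a circularly shifted copy of the reference is matched exactly to the synthetic view with the corresponding azimuth; in the actual method, real synthesized views and learned features play these roles.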