Differentiable Volumetric Rendering: Learning Implicit 3D Representations Without 3D Supervision

2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020

Cited by 856 | Views 313
Abstract
Learning-based 3D reconstruction methods have shown impressive results. However, most methods require 3D supervision, which is often hard to obtain for real-world datasets. Recently, several works have proposed differentiable rendering techniques to train reconstruction models from RGB images. Unfortunately, these approaches are currently restricted to voxel- and mesh-based representations, suffering from discretization or low resolution. In this work, we propose a differentiable rendering formulation for implicit shape and texture representations. Implicit representations have recently gained popularity as they represent shape and texture continuously. Our key insight is that depth gradients can be derived analytically using the concept of implicit differentiation. This allows us to learn implicit shape and texture representations directly from RGB images. We experimentally show that our single-view reconstructions rival those learned with full 3D supervision. Moreover, we find that our method can be used for multi-view 3D reconstruction, directly resulting in watertight meshes.
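To make the stated key insight concrete, the following is a minimal sketch of how implicit differentiation yields an analytic depth gradient; the notation (an implicit field $f_\theta$ with level set $\tau$, ray origin $\mathbf{r}_0$, viewing direction $\mathbf{w}$, and surface depth $\hat{d}$) is introduced here for illustration and is not taken verbatim from the abstract. A camera ray $\mathbf{r}(d) = \mathbf{r}_0 + d\,\mathbf{w}$ meets the surface $\{\mathbf{p} : f_\theta(\mathbf{p}) = \tau\}$ at the depth $\hat{d}$ satisfying $f_\theta(\mathbf{r}_0 + \hat{d}\,\mathbf{w}) = \tau$. Differentiating this identity with respect to the network parameters $\theta$ and solving for $\partial\hat{d}/\partial\theta$ gives

% Sketch of the implicit-differentiation step (symbols assumed as defined above, not quoted from the paper)
\[
  \frac{\partial f_\theta(\hat{\mathbf{p}})}{\partial \theta}
  + \nabla_{\mathbf{p}} f_\theta(\hat{\mathbf{p}}) \cdot \mathbf{w}\,
    \frac{\partial \hat{d}}{\partial \theta} = 0
  \quad\Longrightarrow\quad
  \frac{\partial \hat{d}}{\partial \theta}
  = -\bigl(\nabla_{\mathbf{p}} f_\theta(\hat{\mathbf{p}}) \cdot \mathbf{w}\bigr)^{-1}
    \frac{\partial f_\theta(\hat{\mathbf{p}})}{\partial \theta},
  \qquad \hat{\mathbf{p}} = \mathbf{r}_0 + \hat{d}\,\mathbf{w}.
\]

Under these assumptions, the gradient of the predicted surface depth with respect to the network parameters has a closed form, so an RGB reconstruction loss can be backpropagated through the rendered surface point without differentiating through the iterative surface-finding step.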
Keywords
texture representations, RGB images, single-view reconstruction, multi-view 3D reconstruction, differentiable volumetric rendering, 3D supervision, 3D reconstruction, differentiable rendering formulation, implicit 3D representation learning, implicit shape representation, watertight meshes, mesh-based representation, voxel-based representation