Neural Reflectance Decomposition Under Dynamic Point Light

IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY (2024)

Abstract
Decomposing a scene into its 3D geometry, surface material textures, and illumination is a challenging but important problem in computer vision and graphics. While recent works based on neural implicit representations have shown tremendous advantages, existing methods are not applicable to images illuminated by a single dynamic point light. We propose an entirely self-supervised, end-to-end reflectance decomposition algorithm based on neural implicit representations for objects under a dynamic point light. Our method adopts a staged training framework that estimates the geometry, light source position, and surface material textures through volume rendering, self-shadow inverse rendering, and physics-based surface rendering, respectively. This scheme allows accurate recovery of the surface material textures, which are coupled to the dynamic light, improving the reflectance decomposition capability. For evaluation, we collect a new dataset of several synthetic and real-world objects illuminated by a moving point light. Experiments show that our method achieves superior reflectance decomposition performance compared to state-of-the-art methods, and the recovered elements can be deployed in existing graphics pipelines to perform relighting, material editing, and scene composition.
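The abstract describes a three-stage pipeline: geometry via volume rendering, light-source position via self-shadow inverse rendering, and materials via physics-based surface rendering. Since the paper's code is not part of this page, the following is only a minimal PyTorch sketch of what such a staged training loop might look like; the toy renderers, network sizes, losses, and parameter names are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of a staged reflectance-decomposition training loop.
# Everything below (toy renderers, MLP sizes, losses) is a placeholder
# assumption, not the method's actual implementation.
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Small coordinate MLP mapping 3D points to `out_dim` values."""
    def __init__(self, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, out_dim),
        )
    def forward(self, x):
        return self.net(x)

geometry = MLP(out_dim=1)      # implicit geometry field (e.g. SDF/density)
material = MLP(out_dim=4)      # e.g. albedo (3) + roughness (1)
light_pos = nn.Parameter(torch.tensor([0.0, 0.0, 2.0]))  # unknown point light

def volume_render(points):
    """Toy stand-in for volume rendering along rays (stage 1)."""
    return torch.sigmoid(-geometry(points)).mean(dim=-1)

def shadow_render(points, light):
    """Toy stand-in for self-shadow rendering toward the light (stage 2)."""
    dirs = light - points
    return torch.sigmoid(-geometry(points + 0.1 * dirs)).mean(dim=-1)

def surface_render(points, light):
    """Toy stand-in for physics-based surface shading (stage 3)."""
    albedo = torch.sigmoid(material(points)[..., :3])
    falloff = 1.0 / (1e-3 + (light - points).pow(2).sum(-1, keepdim=True))
    return albedo * falloff

def train_stage(params, loss_fn, steps=100, lr=1e-3):
    """Optimize only the parameters belonging to the current stage."""
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn()
        loss.backward()
        opt.step()

# Dummy sample points and supervision targets (placeholders for real batches).
pts = torch.rand(1024, 3)
target_mask = torch.ones(1024)
target_rgb = torch.rand(1024, 3)

# Stage 1: fit geometry through volume rendering.
train_stage(geometry.parameters(),
            lambda: (volume_render(pts) - target_mask).pow(2).mean())

# Stage 2: recover the point-light position from self-shadows (geometry fixed).
train_stage([light_pos],
            lambda: (shadow_render(pts, light_pos) - target_mask).pow(2).mean())

# Stage 3: fit surface material textures with the physics-based renderer.
train_stage(material.parameters(),
            lambda: (surface_render(pts, light_pos) - target_rgb).pow(2).mean())
```

The point of the sketch is only the staging: each stage optimizes one group of unknowns (geometry, then light position, then materials) while the previously estimated quantities are held fixed, which matches the decoupling strategy the abstract credits for handling textures that are entangled with the moving light.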
Keywords
3D reconstruction, differentiable rendering, self-supervised