DDNeRF: Depth Distribution Neural Radiance Fields

2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)

Abstract
The field of implicit neural representation has made significant progress. Models such as neural radiance fields (NeRF) [12], which use relatively small neural networks, can represent high-quality scenes and achieve state-of-the-art results for novel view synthesis. Training these types of networks, however, is still computationally expensive, and the model struggles with real-life 360° scenes. In this work, we propose the depth distribution neural radiance field (DDNeRF), a new method that significantly increases sampling efficiency along rays during training while achieving superior results for a given sampling budget. DDNeRF achieves this performance by learning a more accurate representation of the density distribution along rays. More specifically, the proposed framework trains a coarse model to predict the internal distribution of the transparency of an input volume along each ray. This estimated distribution then guides the sampling procedure of the fine model. Our method allows using fewer samples during training while achieving better output quality with the same computational resources.
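The coarse-to-fine sampling described above can be illustrated with a short sketch. The snippet below is a minimal, hedged example of distribution-guided ray sampling (inverse-transform sampling from a piecewise-constant PDF built from coarse per-interval weights); it is not the authors' code. The function name `sample_fine_points`, the tensor shapes, and the use of PyTorch are assumptions, and DDNeRF's actual per-interval distribution model is richer than the plain histogram used here.

```python
import torch

def sample_fine_points(bins, weights, n_fine, deterministic=False):
    """Draw fine sample depths along each ray by inverse-transform sampling.

    bins:    (n_rays, n_coarse + 1) depth edges of the coarse intervals
    weights: (n_rays, n_coarse) coarse estimate of density mass per interval
    n_fine:  number of fine samples to draw per ray
    """
    # Normalize the coarse weights into a piecewise-constant PDF and build its CDF.
    pdf = weights / torch.sum(weights, dim=-1, keepdim=True).clamp(min=1e-8)
    cdf = torch.cumsum(pdf, dim=-1)
    cdf = torch.cat([torch.zeros_like(cdf[..., :1]), cdf], dim=-1)  # (n_rays, n_coarse + 1)

    # Uniform samples in [0, 1): stratified (deterministic) or random.
    if deterministic:
        u = torch.linspace(0.0, 1.0, n_fine, device=bins.device)
        u = u.expand(cdf.shape[0], n_fine).contiguous()
    else:
        u = torch.rand(cdf.shape[0], n_fine, device=bins.device)

    # Locate each u in the CDF and linearly interpolate a depth inside that interval.
    idx = torch.searchsorted(cdf, u, right=True).clamp(1, cdf.shape[-1] - 1)
    cdf_lo = torch.gather(cdf, -1, idx - 1)
    cdf_hi = torch.gather(cdf, -1, idx)
    bin_lo = torch.gather(bins, -1, idx - 1)
    bin_hi = torch.gather(bins, -1, idx)
    t = (u - cdf_lo) / (cdf_hi - cdf_lo).clamp(min=1e-8)
    return bin_lo + t * (bin_hi - bin_lo)
```

In this scheme, fine samples concentrate in the intervals where the coarse estimate places most of the density mass, which is what lets the fine model spend a limited sampling budget near surfaces rather than in empty space.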
Keywords
Algorithms: 3D computer vision, Computational photography, Image and video synthesis