General Neural Gauge Fields

ICLR 2023

Abstract
Recent advances in neural radiance fields have significantly pushed the boundary of scene representation learning. To boost the computational efficiency and rendering quality of 3D scenes, a popular line of research maps the 3D coordinate system to another measuring system, e.g., 2D manifolds or hash tables, for modeling radiance fields. The conversion between measuring systems can typically be dubbed a \emph{gauge transformation}, which is usually a pre-defined mapping function, e.g., an orthogonal projection or a spatial hash function. This begs a question: can we learn the gauge transformation along with the neural scene representation in an end-to-end manner? In this work, we extend this problem to a general paradigm with a taxonomy of discrete and continuous cases, and develop an end-to-end training framework to jointly optimize the gauge transformation and radiance fields. Observing that gauge transformation learning easily suffers from learning collapse, we derive a general regularization mechanism from the principle of information conservation during the gauge transformation. On the strength of the unified neural gauge fields framework, we naturally launch a new type of gauge transformation which achieves a trade-off between learning collapse and computation cost.
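The abstract sketches the core idea: instead of a fixed mapping such as an orthogonal projection or a spatial hash, the gauge transformation is itself learned jointly with the radiance field, with a regularizer derived from information conservation to discourage the learned mapping from collapsing. Below is a minimal PyTorch sketch of this idea under stated assumptions, not the authors' implementation: the continuous gauge is modeled as a small MLP mapping 3D points to a 2D manifold, and an InfoNCE-style contrastive term stands in for the information-conservation regularizer; all module names, network sizes, and loss weights are illustrative.

```python
# Hypothetical sketch: jointly optimizing a learnable gauge transformation
# and a radiance field, with an anti-collapse regularizer. Architectures,
# names, and weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContinuousGauge(nn.Module):
    """Learnable mapping from 3D coordinates to a 2D measuring system."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # e.g., UV coordinates on a 2D manifold
        )

    def forward(self, xyz):
        return self.net(xyz)

class RadianceField(nn.Module):
    """Predicts RGB + density from gauge-transformed coordinates."""
    def __init__(self, in_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # RGB + density
        )

    def forward(self, uv):
        return self.net(uv)

def info_preserving_reg(uv, temperature=0.1):
    """Contrastive surrogate for information conservation: each point's
    nearest neighbor in gauge space should be itself, which penalizes
    many-to-one collapse of the learned mapping."""
    sim = -torch.cdist(uv, uv) / temperature   # negative pairwise distances as logits
    target = torch.arange(uv.shape[0])         # each point matches itself
    return F.cross_entropy(sim, target)

gauge, field = ContinuousGauge(), RadianceField()
opt = torch.optim.Adam(
    list(gauge.parameters()) + list(field.parameters()), lr=1e-3
)

xyz = torch.rand(256, 3)          # sampled 3D points
target_rgbd = torch.rand(256, 4)  # placeholder supervision

uv = gauge(xyz)                   # gauge transformation
pred = field(uv)                  # radiance field in the new measuring system
loss = F.mse_loss(pred, target_rgbd) + 0.1 * info_preserving_reg(uv)
opt.zero_grad()
loss.backward()
opt.step()
```

The contrastive term keeps distinct 3D points distinguishable after the transformation, which is one simple way to penalize the learning collapse the abstract warns about; the paper's actual regularization mechanism is derived more generally from information conservation.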
Keywords
NeRF, Neural Scene Representation