Diffusion-based Light Field Synthesis
CoRR (2024)
Abstract
Light fields (LFs), which record comprehensive scene radiance across angular dimensions, find wide applications in 3D reconstruction, virtual reality, and computational photography. However, LF acquisition is inevitably time-consuming and resource-intensive because the mainstream acquisition strategies involve manual capture or laborious software synthesis. Given this challenge, we introduce LFdiff, a straightforward yet effective diffusion-based generative framework tailored for LF synthesis that takes only a single RGB image as input. LFdiff leverages disparity estimated by a monocular depth estimation network and incorporates two distinctive components: a novel condition scheme and a noise estimation network tailored for LF data. Specifically, we design a position-aware warping condition scheme that enhances inter-view geometry learning via a robust conditional signal. We then propose DistgUnet, a disentanglement-based noise estimation network, to harness comprehensive LF representations. Extensive experiments demonstrate that LFdiff excels at synthesizing visually pleasing, disparity-controllable light fields with enhanced generalization capability. Additionally, comprehensive results affirm the broad applicability of the generated LF data, spanning applications such as LF super-resolution and refocusing.
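The position-aware warping condition described above suggests that the single input view is warped toward each sub-aperture position using the estimated disparity before being supplied to the diffusion model as a conditioning signal. Below is a minimal PyTorch sketch of such disparity-based warping; the function name, the linear disparity-shift model per angular step, and the tensor layout are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def warp_to_view(center_img, disparity, du, dv):
    """Warp a central RGB view to the angular position offset by (du, dv)
    sub-aperture units, assuming a linear shift of `disparity` pixels per
    angular step (a common light-field geometry approximation).

    center_img: (B, 3, H, W) input RGB image
    disparity:  (B, 1, H, W) per-pixel disparity from a monocular depth network
    du, dv:     angular offsets of the target view relative to the center
    """
    b, _, h, w = center_img.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=center_img.dtype),
        torch.arange(w, dtype=center_img.dtype),
        indexing="ij",
    )
    xs = xs.expand(b, h, w)
    ys = ys.expand(b, h, w)
    # Shift sampling locations by disparity scaled with the angular offset.
    x_src = xs + du * disparity[:, 0]
    y_src = ys + dv * disparity[:, 0]
    # Normalize to [-1, 1] as required by grid_sample.
    grid = torch.stack(
        (2.0 * x_src / (w - 1) - 1.0, 2.0 * y_src / (h - 1) - 1.0), dim=-1
    )
    return F.grid_sample(center_img, grid, align_corners=True)

# Example: warp the input view to every position of a 5x5 light field,
# producing a stack of warped views as the conditioning signal.
img = torch.rand(1, 3, 128, 128)                 # hypothetical input view
disp = torch.rand(1, 1, 128, 128) * 2.0 - 1.0    # hypothetical disparity map
views = [warp_to_view(img, disp, u - 2, v - 2) for u in range(5) for v in range(5)]
condition = torch.stack(views, dim=1)            # (B, 25, 3, H, W)
```

The "position-aware" aspect presumably also encodes each view's angular coordinates alongside the warped images; that detail is omitted from this sketch.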