Attention-based encoder-decoder network for depth estimation from color-coded light fields

AIP ADVANCES(2023)

Abstract
Compressive light field cameras have attracted notable attention over the past few years because they can efficiently exploit the redundancy in light fields. However, much of the research has concentrated only on reconstructing the entire light field from compressed sampling, ignoring the possibility of directly extracting information such as depth from it. In this paper, we introduce a light field camera configuration with a random color-coded microlens array. For the resulting color-coded light fields, we propose a novel attention-based encoder-decoder network. Specifically, the encoder part compresses the coded measurement into a low-dimensional representation that removes most of the redundancy, and the decoder part constructs the depth map directly from this latent representation. The attention mechanism enables the network to process spatial and angular features dynamically and effectively, thus significantly improving performance. Extensive experiments on synthetic and real-world datasets show that our method outperforms state-of-the-art light field depth estimation methods designed for non-coded light fields. To our knowledge, this is the first study to combine color-coded light fields with an attention-based deep learning approach, which provides crucial insight into the design of enhanced light field photography systems.
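To make the pipeline described above concrete, the following is a minimal NumPy sketch of the two ideas the abstract names: forming a single coded 2D measurement by modulating each angular view with a random per-channel color mask (a stand-in for the random color-coded microlens array), and applying a generic scaled dot-product self-attention step to the resulting features. All dimensions and names (`A`, `H`, `W`, `color_mask`, etc.) are illustrative assumptions, not the paper's actual network or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): A x A angular views,
# each an H x W image with C color channels.
A, H, W, C = 3, 8, 8, 3
light_field = rng.random((A * A, H, W, C))

# Random color-coded modulation: each angular view is multiplied by a
# random binary mask per color channel, then all views are summed onto
# the sensor, yielding one coded 2D measurement.
color_mask = rng.integers(0, 2, size=(A * A, H, W, C)).astype(float)
coded_measurement = (light_field * color_mask).sum(axis=0)  # shape (H, W, C)

def scaled_dot_product_attention(q, k, v):
    """Generic attention: softmax(q k^T / sqrt(d)) v."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Treat each pixel of the coded measurement as a token with a C-dim
# feature and let tokens attend to one another (self-attention); a real
# encoder-decoder would interleave such attention with learned layers.
tokens = coded_measurement.reshape(H * W, C)
attended = scaled_dot_product_attention(tokens, tokens, tokens)

print(coded_measurement.shape, attended.shape)
```

This only illustrates the data flow (coded measurement in, attention-weighted features out); the paper's actual encoder, decoder, and attention design are learned end-to-end to regress the depth map.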
Keywords
depth estimation, encoder–decoder network, attention-based, color-coded