Ultra-High Resolution SVBRDF Recovery from a Single Image

ACM Transactions on Graphics (2023)

Cited by 2 | Views: 29

Abstract
Existing convolutional neural networks have achieved great success in recovering Spatially Varying Bidirectional Reflectance Distribution Function (SVBRDF) maps from a single image. However, they mainly focus on handling low-resolution (e.g., 256 × 256) inputs. Ultra-High Resolution (UHR) material maps are notoriously difficult to acquire with existing networks because (1) finite computational resources set bounds on input receptive fields and output resolutions, and (2) convolutional layers operate locally and lack the ability to capture long-range structural dependencies in UHR images. We propose an implicit neural reflectance model and a divide-and-conquer solution to address these two challenges simultaneously. We first crop a UHR image into low-resolution patches, each of which is processed by a local feature extractor to extract important details. To fully exploit long-range spatial dependencies and ensure global coherence, we incorporate a global feature extractor and several coordinate-aware feature assembly modules into our pipeline. The global feature extractor contains several lightweight material vision transformers that have a global receptive field at each scale and are able to infer long-range relationships in the material. After decoding globally coherent feature maps assembled by the coordinate-aware feature assembly modules, the proposed end-to-end method is able to generate UHR SVBRDF maps from a single image with fine spatial details and consistent global structures.
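The divide-and-conquer strategy described in the abstract (crop the UHR input into patches, process each patch locally, then reassemble patch outputs using their coordinates into one globally consistent map) can be sketched in miniature as follows. This is an illustrative assumption-laden toy, not the paper's implementation: the real method uses learned local/global feature extractors and transformer modules, whereas here `local_extractor` is an identity stand-in and images are plain nested lists.

```python
# Hedged sketch of the crop -> process -> coordinate-aware assembly
# pipeline. All names here are hypothetical stand-ins for the paper's
# learned modules.

def crop_into_patches(image, patch):
    """Split a 2D image (list of rows) into patch x patch tiles,
    each tagged with its top-left (y, x) coordinate."""
    h, w = len(image), len(image[0])
    tiles = []
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            tile = [row[x:x + patch] for row in image[y:y + patch]]
            tiles.append(((y, x), tile))
    return tiles

def local_extractor(tile):
    # Stand-in for the paper's local feature extractor (here: identity).
    return tile

def assemble(tiles, h, w):
    """Coordinate-aware assembly: place each processed tile back at its
    recorded position so the full-resolution output stays coherent."""
    out = [[0] * w for _ in range(h)]
    for (y, x), tile in tiles:
        for dy, row in enumerate(tile):
            for dx, v in enumerate(row):
                out[y + dy][x + dx] = v
    return out

# Toy 8x8 "image"; the real inputs are ultra-high resolution.
img = [[r * 8 + c for c in range(8)] for r in range(8)]
patches = [(coord, local_extractor(t))
           for coord, t in crop_into_patches(img, 4)]
recon = assemble(patches, 8, 8)
assert recon == img  # identity round-trip preserves the image
```

The key design point the sketch illustrates is that each patch carries its coordinates through processing, which is what lets the assembly step (and, in the paper, the coordinate-aware feature assembly modules) produce a seamless full-resolution result rather than a grid of independent tiles.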
Keywords
SVBRDF, Ultra-High Resolution, neural networks, transformer