Deep thoughts on deep image compression

SIGGRAPH '18: Special Interest Group on Computer Graphics and Interactive Techniques Conference, Vancouver, British Columbia, Canada, August 2018

Abstract
Deep image compositing has, in the last decade, become an industry-standard approach to combining multiple computer-generated elements into a final frame. With rich support for multiple depth-specified samples per pixel, deep images overcome many of the challenges previously faced when combining multiple images using simple alpha channels and/or depth values. A practical challenge when using deep images, however, is managing the data footprint. The visual fidelity of computer-generated environments, characters and effects is continually growing, typically resulting in both a higher number of elements and greater complexity within each element. It is not uncommon for "gigabytes" to describe the size of deep image collections, which, as more and more visual effects facilities establish a global presence, introduces a significant concern about timely overseas data transfer. Further, as deep images flow through compositing networks, the high sample count contributes to longer processing times. Our observation is that, with a richer contextual understanding of the target composite, systems (both automatic and artist-controlled) can be built to significantly compress deep images such that there is no perceptual difference in the final result.
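The abstract does not spell out the compression mechanism itself; one common building block in deep-image workflows is reducing the per-pixel sample count by merging depth-adjacent samples. The sketch below is a minimal illustration of that idea under a simple depth-tolerance heuristic; the `DeepSample` type, the `merge_deep_pixel` function, and the tolerance criterion are assumptions for illustration, not the system described in the talk.

```python
# Illustrative sketch only: a toy lossy "sample merging" pass for one deep pixel.
# The data layout and the depth-tolerance heuristic are assumptions, not the
# authors' method.
from dataclasses import dataclass
from typing import List

@dataclass
class DeepSample:
    z: float        # sample depth (camera space)
    r: float        # premultiplied colour channels
    g: float
    b: float
    a: float        # alpha (coverage) of the sample

def over(front: DeepSample, back: DeepSample) -> DeepSample:
    """Composite two samples with the 'over' operator, keeping the front depth."""
    w = 1.0 - front.a
    return DeepSample(
        z=front.z,
        r=front.r + w * back.r,
        g=front.g + w * back.g,
        b=front.b + w * back.b,
        a=front.a + w * back.a,
    )

def merge_deep_pixel(samples: List[DeepSample], z_tol: float) -> List[DeepSample]:
    """Collapse depth-adjacent samples closer than z_tol, reducing sample count."""
    if not samples:
        return []
    ordered = sorted(samples, key=lambda s: s.z)
    merged = [ordered[0]]
    for s in ordered[1:]:
        if s.z - merged[-1].z < z_tol:
            merged[-1] = over(merged[-1], s)   # fold into the previous sample
        else:
            merged.append(s)
    return merged

if __name__ == "__main__":
    pixel = [
        DeepSample(10.0, 0.2, 0.1, 0.0, 0.3),
        DeepSample(10.1, 0.1, 0.2, 0.1, 0.4),  # very close to the previous sample
        DeepSample(25.0, 0.0, 0.0, 0.5, 0.8),
    ]
    print(len(merge_deep_pixel(pixel, z_tol=0.5)))  # 2 samples remain
```

A production system of the kind the abstract alludes to would presumably drive the merge decision from context about the target composite (for example, colour and alpha differences and what the downstream graph actually uses), rather than a fixed depth tolerance.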
Keywords
deep images,compositing,compression