Implicit Neural Representations for Image Compression

European Conference on Computer Vision (2022)

Abstract
Implicit Neural Representations (INRs) gained attention as a novel and effective representation for various data types. Recently, prior work applied INRs to image compression. Such compression algorithms are promising candidates as a general-purpose approach for any coordinate-based data modality. However, in order to live up to this promise, current INR-based compression algorithms need to improve their rate-distortion performance by a large margin. This work makes progress on this problem. First, we propose meta-learned initializations for INR-based compression, which improve rate-distortion performance. As a side effect, they also lead to faster convergence. Second, we introduce a simple yet highly effective change to the network architecture compared to prior work on INR-based compression: we combine SIREN networks with positional encodings, which improves rate-distortion performance. Our contributions to source compression with INRs vastly outperform prior work. We show that our INR-based compression algorithm, meta-learning combined with SIREN and positional encodings, outperforms JPEG2000 and Rate-Distortion Autoencoders on Kodak with 2x reduced dimensionality for the first time and closes the gap on full-resolution images. To underline the generality of INR-based source compression, we further perform experiments on 3D shape compression, where our method greatly outperforms Draco, a traditional compression algorithm.
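The sketch below illustrates the architectural idea the abstract describes: a coordinate MLP with sine activations (SIREN) applied on top of Fourier positional encodings, overfit to a single image so that the network weights become the compressed representation. This is a minimal, hedged reconstruction in PyTorch, not the authors' released code; the layer widths, frequency count, the w0 scale, and the training loop are illustrative placeholders, and the meta-learned initialization and weight quantization steps of the full method are omitted.

```python
import torch
import torch.nn as nn


class PositionalEncoding(nn.Module):
    """Map 2D pixel coordinates to Fourier features [sin(2^k * pi * p), cos(2^k * pi * p)]."""
    def __init__(self, num_frequencies=10):
        super().__init__()
        self.freqs = torch.pi * 2.0 ** torch.arange(num_frequencies, dtype=torch.float32)

    def forward(self, coords):                        # coords: (N, 2), values in [-1, 1]
        proj = coords.unsqueeze(-1) * self.freqs      # (N, 2, F)
        enc = torch.cat([proj.sin(), proj.cos()], dim=-1)
        return enc.flatten(start_dim=1)               # (N, 4 * F)


class Sine(nn.Module):
    """SIREN activation: sin(w0 * x); w0=30 is the commonly used default, assumed here."""
    def __init__(self, w0=30.0):
        super().__init__()
        self.w0 = w0

    def forward(self, x):
        return torch.sin(self.w0 * x)


class SirenINR(nn.Module):
    """Small coordinate MLP: positional encoding followed by sine-activated linear layers."""
    def __init__(self, num_frequencies=10, hidden=64, depth=3, out_dim=3):
        super().__init__()
        self.encoding = PositionalEncoding(num_frequencies)
        in_dim = 4 * num_frequencies
        layers = []
        for _ in range(depth):
            layers += [nn.Linear(in_dim, hidden), Sine()]
            in_dim = hidden
        layers.append(nn.Linear(in_dim, out_dim))     # RGB output
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(self.encoding(coords))


# Overfit the INR to one image: coordinates in, RGB values out. Storing the
# (quantized) weights of this small network is what yields the compressed file.
model = SirenINR()
coords = torch.rand(1024, 2) * 2 - 1                  # placeholder pixel coordinates
rgb_target = torch.rand(1024, 3)                      # placeholder ground-truth colors
optim = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(100):
    optim.zero_grad()
    loss = torch.nn.functional.mse_loss(model(coords), rgb_target)
    loss.backward()
    optim.step()
```

In the paper's full pipeline, the per-image fitting above would start from a meta-learned initialization rather than random weights, which is what yields the reported gains in rate-distortion performance and convergence speed.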
Keywords
image compression, representations