Towards Backward-Compatible Continual Learning of Image Compression
CVPR 2024
Abstract
This paper explores the possibility of extending the capability of
pre-trained neural image compressors (e.g., adapting to new data or target
bitrates) without breaking backward compatibility, the ability to decode
bitstreams encoded by the original model. We refer to this problem as continual
learning of image compression. Our initial findings show that baseline
solutions, such as end-to-end fine-tuning, do not preserve the desired backward
compatibility. To tackle this, we propose a knowledge replay training strategy
that effectively addresses this issue. We also design a new model architecture
that enables more effective continual learning than existing baselines.
Experiments are conducted for two scenarios: data-incremental learning and
rate-incremental learning. The main conclusion of this paper is that neural
image compressors can be fine-tuned to achieve better performance (compared to
their pre-trained version) on new data and rates without compromising backward
compatibility. Our code is available at
https://gitlab.com/viper-purdue/continual-compression
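Below is a minimal sketch of how a knowledge-replay fine-tuning loop for backward-compatible continual learning might look. It is an illustrative assumption, not the paper's actual architecture or training recipe: the toy autoencoder, loss weights, and the use of frozen-encoder latents as the "replayed bitstreams" are all placeholders (a real codec would include entropy coding and a rate term).

```python
# Hedged sketch: fine-tune a compressor on new data while replaying latents
# produced by the frozen pre-trained encoder, so that old "bitstreams"
# remain decodable by the updated decoder. All components are illustrative.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyCompressor(nn.Module):
    """Stand-in autoencoder; a real neural codec adds entropy coding and a rate loss."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 64, 5, 2, 2), nn.ReLU(),
                                     nn.Conv2d(64, 32, 5, 2, 2))
        self.decoder = nn.Sequential(nn.ConvTranspose2d(32, 64, 5, 2, 2, 1), nn.ReLU(),
                                     nn.ConvTranspose2d(64, 3, 5, 2, 2, 1))

    def forward(self, x):
        y = self.encoder(x)
        return self.decoder(y), y

def finetune_with_replay(model, new_loader, replay_loader, steps=1000, lam=1.0):
    """Fine-tune `model` on new-domain images (new_loader) with a replay loss
    on old-domain images (replay_loader) encoded by the frozen original model."""
    old_model = copy.deepcopy(model).eval()          # frozen pre-trained codec
    for p in old_model.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    for step, (x_new, x_old) in enumerate(zip(new_loader, replay_loader)):
        if step >= steps:
            break
        # (1) Ordinary fine-tuning objective on new data.
        x_hat_new, _ = model(x_new)
        loss_new = F.mse_loss(x_hat_new, x_new)

        # (2) Knowledge replay: latents from the *old* encoder (standing in for
        #     previously encoded bitstreams) must still decode well with the
        #     updated decoder, which preserves backward compatibility.
        with torch.no_grad():
            y_old = old_model.encoder(x_old)
        x_hat_old = model.decoder(y_old)
        loss_replay = F.mse_loss(x_hat_old, x_old)

        loss = loss_new + lam * loss_replay
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```

The weight `lam` trades off adaptation to the new data or rate against fidelity on bitstreams produced by the original model; setting it to zero recovers plain end-to-end fine-tuning, which the abstract notes does not preserve backward compatibility.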