CGVC-T: Contextual Generative Video Compression with Transformers

IEEE Journal on Emerging and Selected Topics in Circuits and Systems (2024)

Abstract
With the growing demand for video streaming, recent years have witnessed increasing interest in applying deep learning to video compression. Most existing neural video compression approaches adopt the predictive residual coding framework, which is sub-optimal at removing redundancy across frames. In addition, purely minimizing the pixel-wise differences between the raw frame and the decompressed frame is ineffective at improving the perceptual quality of videos. In this paper, we propose a contextual generative video compression method with transformers (CGVC-T), which adopts generative adversarial networks (GANs) for perceptual quality enhancement and applies contextual coding to improve coding efficiency. We also employ a hybrid transformer-convolution structure in the auto-encoders of CGVC-T, which learns both global and local features within video frames to remove temporal and spatial redundancy. Furthermore, we introduce novel entropy models to estimate the probability distributions of the compressed latent representations, reducing the bit rates required to transmit the compressed video. Experiments on the HEVC, UVG, and MCL-JCV datasets demonstrate that the perceptual quality of CGVC-T, in terms of FID, KID, and LPIPS scores, surpasses state-of-the-art learned video codecs, the industrial video codecs x264 and x265, and the official reference software JM, HM, and VTM. CGVC-T also achieves superior DISTS scores among all compared learned video codecs.
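As a concrete illustration of the hybrid transformer-convolution structure the abstract describes, the following is a minimal PyTorch sketch of one such block: a convolutional branch captures local features while a self-attention branch captures global context across the frame, and the two are fused with a residual connection. The module name, dimensions, and fusion scheme are illustrative assumptions, not the authors' actual CGVC-T architecture.

```python
# Minimal sketch of a hybrid transformer-convolution block of the kind the
# abstract describes. All names, channel sizes, and the fusion strategy are
# hypothetical; they do not reproduce the authors' CGVC-T implementation.
import torch
import torch.nn as nn

class HybridConvTransformerBlock(nn.Module):
    def __init__(self, channels: int = 64, num_heads: int = 4):
        super().__init__()
        # Local branch: depthwise-separable convolution over the feature map.
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels),
            nn.Conv2d(channels, channels, 1),
            nn.GELU(),
        )
        # Global branch: multi-head self-attention over spatial positions
        # treated as tokens, so every position can attend to every other.
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        # 1x1 convolution fusing the concatenated local and global features.
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.local(x)
        tokens = self.norm(x.flatten(2).transpose(1, 2))   # (B, H*W, C)
        glob, _ = self.attn(tokens, tokens, tokens)
        glob = glob.transpose(1, 2).reshape(b, c, h, w)
        # Residual fusion of local and global features.
        return x + self.fuse(torch.cat([local, glob], dim=1))

if __name__ == "__main__":
    block = HybridConvTransformerBlock()
    out = block(torch.randn(1, 64, 32, 32))
    print(out.shape)  # torch.Size([1, 64, 32, 32])
```

Stacking such blocks in the encoder and decoder lets the codec model both fine texture (convolution) and long-range dependencies (attention), which is one plausible way to realize the global-and-local feature learning the paper claims.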
Keywords
Contextual coding, entropy model, generative adversarial network, generative model, perceptual quality, transformers, video compression