Accelerating Learnt Video Codecs with Gradient Decay and Layer-wise Distillation
CoRR (2023)
Abstract
In recent years, end-to-end learnt video codecs have demonstrated their
potential to compete with conventional coding algorithms in terms of compression
efficiency. However, most learning-based video compression models are
associated with high computational complexity and latency, in particular at the
decoder side, which limits their deployment in practical applications. In this
paper, we present a novel model-agnostic pruning scheme based on gradient decay
and adaptive layer-wise distillation. Gradient decay enhances parameter
exploration during sparsification whilst preventing runaway sparsity and is
superior to the standard Straight-Through Estimation. The adaptive layer-wise
distillation regulates the sparse training in various stages based on the
distortion of intermediate features. This stage-wise design efficiently updates
parameters with minimal computational overhead. The proposed approach has been
applied to three popular end-to-end learnt video codecs, FVC, DCVC, and
DCVC-HEM. Results confirm that our method yields up to a 65% complexity reduction
and a 2x speed-up with less than a 0.3dB drop in BD-PSNR. Supporting code and
supplementary material can be downloaded from:
https://jasminepp.github.io/lightweightdvc/
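The abstract does not specify the exact gradient-decay rule, but its contrast with Straight-Through Estimation (STE) can be illustrated with a minimal sketch. In standard magnitude pruning, pruned weights either receive no gradient (hard masking) or the full gradient (STE); a gradient-decay scheme instead attenuates the gradient reaching pruned weights over training, allowing early parameter exploration while letting sparsity settle. The function name, the exponential schedule, and the `gamma` value below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def masked_grad(grad, mask, step, mode="gradient_decay", gamma=0.98):
    """Scale the gradient reaching pruned weights during sparse training.

    grad : gradient array for a layer's weights
    mask : binary array, 1 = kept weight, 0 = pruned weight
    step : current training step (drives the decay schedule)
    """
    if mode == "hard":
        scale = 0.0            # pruned weights are never updated
    elif mode == "ste":
        scale = 1.0            # straight-through: full gradient everywhere
    else:
        scale = gamma ** step  # assumed exponential decay toward hard masking
    # kept weights always get the full gradient; pruned ones get the scaled one
    return grad * (mask + (1.0 - mask) * scale)
```

At `step=0` the decayed rule behaves like STE (pruned weights can still move and be "revived"), while for large `step` it converges to hard masking, which is one way to prevent the runaway sparsity the abstract mentions.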