Gradient Forward-Propagation for Large-Scale Temporal Video Modelling

2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021)

Abstract
How can neural networks be trained on large-volume temporal data efficiently? To compute the gradients required to update parameters, backpropagation blocks computations until the forward and backward passes are completed. For temporal signals, this introduces high latency and hinders real-time learning. It also creates a coupling between consecutive layers, which limits model parallelism and increases memory consumption. In this paper, we build upon Sideways, which avoids blocking by propagating approximate gradients forward in time, and we propose mechanisms for temporal integration of information based on different variants of skip connections. We also show how to decouple computation and delegate individual neural modules to different devices, allowing distributed and parallel training. The proposed Skip-Sideways achieves low latency training, model parallelism, and, importantly, is capable of extracting temporal features, leading to more stable training and improved performance on real-world action recognition video datasets such as HMDB51, UCF101, and the large-scale Kinetics-600. Finally, we also show that models trained with Skip-Sideways generate better future frames than Sideways models, and hence they can better utilize motion cues.
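The abstract describes a pipelined scheme in which neural modules are decoupled across devices, each module processes the activation its predecessor emitted at the previous time step, approximate gradients are propagated forward in time rather than blocking on a full backward pass, and skip connections carry frame information forward for temporal integration. The sketch below illustrates only that pipelining and skip-connection idea on a toy stack of linear modules; the shapes, the tanh nonlinearity, and the stale local update rule are illustrative assumptions and are not the paper's actual Skip-Sideways algorithm.

```python
# Illustrative sketch only: a pipelined stack of modules processing a frame
# stream, where module i at step t consumes the activation that module i-1
# produced at step t-1 (so modules never block on each other within a step),
# plus a skip connection carrying the raw frame forward. All names, shapes,
# and the update rule are hypothetical, not the published Skip-Sideways method.
import numpy as np

rng = np.random.default_rng(0)
D, T, L = 16, 8, 3                          # feature dim, stream length, depth

weights = [rng.normal(scale=0.1, size=(D, D)) for _ in range(L)]
buffers = [np.zeros(D) for _ in range(L)]   # activations from the previous step

frames = rng.normal(size=(T, D))            # stand-in for per-frame features
targets = rng.normal(size=(T, D))           # stand-in supervision signal
lr = 1e-2

for t in range(T):
    new_buffers = []
    for i, W in enumerate(weights):
        # Pipelined input: the current frame for the first module, otherwise
        # the activation module i-1 produced at the PREVIOUS time step.
        x = frames[t] if i == 0 else buffers[i - 1]
        # Skip connection: re-inject the raw frame (temporal integration).
        h = np.tanh(W @ x) + frames[t]
        new_buffers.append(h)
        # Stale, purely local update: a crude surrogate for the approximate
        # gradients that the paper propagates forward in time.
        err = h - targets[t]
        grad = np.outer((1 - np.tanh(W @ x) ** 2) * err, x)
        weights[i] = W - lr * grad
    buffers = new_buffers
```

Because each module only reads a buffer written at the previous step, the inner loop over modules could in principle run in parallel, one module per device, which is the model-parallel property the abstract highlights.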
Keywords
large-volume temporal data, backpropagation blocks computations, temporal signals, real-time learning, consecutive layers, model parallelism, increases memory consumption, approximate gradients, skip connections, decouple computation, individual neural modules, distributed training, parallel training, Skip-Sideways, temporal features, stable training, real-world action recognition video datasets, large-scale Kinetics-600, Sideways models, gradient forward-propagation, large-scale temporal video modelling, neural networks