MS-LSTM: Exploring spatiotemporal multiscale representations in video prediction domain

CoRR (2023)

Abstract
The drastic variation of motion across spatial and temporal dimensions makes video prediction extremely challenging. Existing RNN models gain performance by deepening or widening the network, capturing multi-scale features of the video only by stacking layers; this is inefficient and incurs prohibitive training costs (memory, FLOPs, and training time). In contrast, this paper proposes a spatiotemporal multi-scale model, MS-LSTM, designed wholly from a multi-scale perspective. On top of stacked layers, MS-LSTM incorporates two additional efficient multi-scale designs to fully capture spatiotemporal context. Concretely, we employ LSTMs with mirrored pyramid structures to construct spatial multi-scale representations and LSTMs with different convolution kernels to construct temporal multi-scale representations. We theoretically analyze the training cost and performance of MS-LSTM and its components. Detailed comparison experiments with twelve baseline models on four video datasets show that MS-LSTM achieves better performance at lower training cost.
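The two multi-scale designs named in the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: `ConvLSTMCell`, `MultiScaleBlock`, and all parameter names are hypothetical. Cells with different convolution kernels run in parallel and their outputs are fused (temporal multi-scale); a mirrored-pyramid network for spatial multi-scale would place such blocks between down- and up-sampling stages.

```python
# Minimal sketch (assumption, not the authors' released code): a standard
# ConvLSTM cell plus a block that runs cells with different kernel sizes in
# parallel, illustrating the temporal multi-scale idea from the abstract.
import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    """ConvLSTM cell; the convolution kernel size sets the receptive field."""

    def __init__(self, in_ch, hid_ch, kernel_size):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch,
                              kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.conv(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)


class MultiScaleBlock(nn.Module):
    """Temporal multi-scale: parallel ConvLSTM cells with different kernels,
    fused by a 1x1 convolution. A spatial multi-scale (mirrored pyramid)
    network would stack such blocks between down- and up-sampling stages."""

    def __init__(self, in_ch, hid_ch, kernels=(3, 5)):
        super().__init__()
        self.cells = nn.ModuleList(ConvLSTMCell(in_ch, hid_ch, k) for k in kernels)
        self.fuse = nn.Conv2d(hid_ch * len(kernels), hid_ch, kernel_size=1)

    def forward(self, x, states):
        outs, new_states = [], []
        for cell, s in zip(self.cells, states):
            h, s = cell(x, s)
            outs.append(h)
            new_states.append(s)
        return self.fuse(torch.cat(outs, dim=1)), new_states


# One time step on a batch of 64x64 frames.
block = MultiScaleBlock(in_ch=1, hid_ch=16)
x = torch.randn(2, 1, 64, 64)
states = [(torch.zeros(2, 16, 64, 64), torch.zeros(2, 16, 64, 64))
          for _ in block.cells]
y, states = block(x, states)  # fused multi-scale features, shape (2, 16, 64, 64)
```

Growing the receptive field through parallel kernels and a mirrored pyramid, rather than by stacking ever more layers, is what the abstract credits for the lower memory, FLOPs, and training-time costs.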
Keywords
Video prediction, Multiple scale, LSTM