TCNCA: Temporal Convolution Network with Chunked Attention for Scalable Sequence Processing
CoRR (2023)
Abstract
MEGA is a recent transformer-based architecture that utilizes a linear
recurrent operator whose FFT-based parallel computation scales as
$O(L \log L)$, where $L$ is the sequence length. We build upon this approach by
replacing the linear recurrence with a special temporal convolutional network
that permits a larger receptive field with shallower networks and reduces
the computational complexity to $O(L)$. The resulting model is called TCNCA, a
Temporal Convolutional Network with Chunked Attention. We evaluate TCNCA on
EnWik8 language modeling, long-range-arena (LRA) sequence classification, and
a synthetic reasoning benchmark, associative recall. On EnWik8, TCNCA
outperforms MEGA, reaching a lower loss with a $1.37\times$/$1.24\times$ faster
forward/backward pass during training. The dilated convolutions used in TCNCA
are consistently and significantly faster than the FFT-based parallelized
recurrence on GPUs, making them a scalable candidate for handling very long
sequences: they are up to $7.07\times$/$2.86\times$ faster in the
forward/backward pass for sequences up to 131k. Further, on LRA, TCNCA
achieves, on average, a $1.28\times$ inference speed-up with accuracy similar
to MEGA's. On associative recall, we find that even a simplified version of
TCNCA, without excessive multiplicative and additive interactions, remains
superior or competitive to MEGA across a range of sequence lengths and
vocabulary sizes.
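To make the two ingredients named in the abstract concrete, the sketch below shows a stack of dilated causal convolutions (whose receptive field grows exponentially with depth while the cost stays $O(L)$) followed by attention applied within fixed-size chunks. This is a minimal, hypothetical PyTorch illustration under assumed module names, layer sizes, and wiring; it is not the authors' implementation of TCNCA.

```python
# Minimal sketch (assumptions, not the paper's code): dilated causal
# convolutions for long-range mixing at O(L), then attention restricted to
# fixed-size chunks so its cost is also linear in sequence length.
import torch
import torch.nn as nn


class DilatedCausalConv(nn.Module):
    def __init__(self, dim, kernel_size=3, dilation=1):
        super().__init__()
        # Left-pad so each output position only sees past inputs (causal).
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(dim, dim, kernel_size, dilation=dilation)

    def forward(self, x):  # x: (batch, dim, length)
        return self.conv(nn.functional.pad(x, (self.pad, 0)))


class ChunkedSelfAttention(nn.Module):
    def __init__(self, dim, num_heads=4, chunk_size=128):
        super().__init__()
        self.chunk_size = chunk_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):  # x: (batch, length, dim)
        b, l, d = x.shape
        assert l % self.chunk_size == 0, "pad the sequence to a chunk multiple"
        x = x.reshape(b * l // self.chunk_size, self.chunk_size, d)
        out, _ = self.attn(x, x, x)  # attention only within each chunk: O(L)
        return out.reshape(b, l, d)


class TCNCABlockSketch(nn.Module):
    def __init__(self, dim, depth=4, chunk_size=128):
        super().__init__()
        # Dilations 1, 2, 4, ... give a receptive field of roughly 2^depth
        # with a shallow stack, the property the abstract highlights.
        self.convs = nn.ModuleList(
            [DilatedCausalConv(dim, dilation=2 ** i) for i in range(depth)]
        )
        self.attn = ChunkedSelfAttention(dim, chunk_size=chunk_size)

    def forward(self, x):  # x: (batch, length, dim)
        h = x.transpose(1, 2)
        for conv in self.convs:
            h = torch.relu(conv(h))
        return self.attn(h.transpose(1, 2))


if __name__ == "__main__":
    block = TCNCABlockSketch(dim=64)
    print(block(torch.randn(2, 1024, 64)).shape)  # torch.Size([2, 1024, 64])
```

In this sketch, the dilated convolutions replace the FFT-parallelized linear recurrence as the long-range mixing operator, which is where the claimed speed advantage at large sequence lengths would come from; the chunked attention keeps the attention cost linear in $L$.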