Transformer-based network with temporal depthwise convolutions for sEMG recognition

Pattern Recognition (2024)

Abstract
Considerable progress has been made in surface electromyography (sEMG) pattern recognition with deep learning, improving sEMG-based gesture classification. Current deep learning techniques rely mainly on convolutional neural networks (CNNs), recurrent neural networks (RNNs), and their hybrids. However, CNNs focus on spatial and local information, RNNs cannot be parallelized and suffer from vanishing/exploding gradients, and their hybrids often incur high model complexity and computational cost. Because sEMG signals are sequential in nature, and motivated by the Transformer sequence model and its self-attention mechanism, we propose a Transformer-based network, the temporal depthwise convolutional Transformer (TDCT), for sparse sEMG recognition. This network achieves higher recognition accuracy with fewer convolution parameters and lower computational cost; it is parallelizable and can capture long-range features in sEMG signals. We improve the locality and channel-correlation modeling of multi-head self-attention (MSA) for sEMG by replacing its linear transformations with the proposed temporal depthwise convolution (TDC), which reduces convolution parameters and computation while improving feature learning. Four sEMG datasets, Ninapro DB1, DB2, DB5, and OYDB, are used for evaluation and comparison. Our model outperforms other methods, including Transformer-based networks, at most window lengths when recognizing raw sparse sEMG signals, achieving state-of-the-art classification accuracy.
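The abstract's core idea is replacing the dense linear projections in self-attention with a depthwise convolution applied along the time axis, which cuts the parameter count from roughly channels² to channels × kernel_size. The paper's actual TDC implementation is not reproduced here; the following is a minimal NumPy sketch of a depthwise temporal convolution, with the channel count and kernel size chosen only for illustration.

```python
import numpy as np

def temporal_depthwise_conv(x, kernels):
    """Depthwise 1D convolution along the time axis.

    x:       (channels, time) sEMG window
    kernels: (channels, k) one temporal filter per channel

    Each channel is filtered with its own kernel (no cross-channel
    mixing), so the parameter count is channels * k rather than
    channels**2 for a dense linear mixing layer.
    """
    C, T = x.shape
    k = kernels.shape[1]
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))  # same-length ("same") padding
    out = np.empty((C, T))
    for c in range(C):
        for t in range(T):
            # cross-correlation of channel c with its own kernel
            out[c, t] = np.dot(xp[c, t:t + k], kernels[c])
    return out

# Illustrative parameter comparison (sizes are assumptions, not the paper's):
C, k = 64, 7
depthwise_params = C * k   # one k-tap filter per channel
dense_params = C * C       # a dense linear projection over channels
```

With these illustrative sizes the depthwise variant uses 448 parameters versus 4096 for a dense projection, which is the kind of saving the abstract attributes to TDC inside the MSA blocks.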
Keywords
Surface electromyography, Feature learning, Gesture recognition, Transformer, Self-attention, Temporal depthwise convolution