Fingerspelling PoseNet: Enhancing Fingerspelling Translation with Pose-Based Transformer Models

2024 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW)

Abstract
We address the task of American Sign Language fingerspelling translation using videos in the wild. We exploit advances in more accurate hand pose estimation and propose a novel architecture that leverages a transformer-based encoder-decoder model, enabling seamless contextual word translation. The translation model is augmented with a novel loss term that accurately predicts the length of the fingerspelled word, benefiting both training and inference. We also propose a novel two-stage inference approach that re-ranks the hypotheses using the language-model capabilities of the decoder. Through extensive experiments, we demonstrate that our proposed method outperforms state-of-the-art models on ChicagoFSWild and ChicagoFSWild+, achieving more than 10% relative improvement in performance. Our findings highlight the effectiveness of our approach and its potential to advance fingerspelling recognition in sign language translation. Code is available at https://github.com/pooyafayyaz/Fingerspelling-PoseNet.
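The two-stage inference the abstract describes can be illustrated with a minimal sketch: a beam search first produces candidate words with decoder log-probabilities, and a second pass re-scores them with a language-model term before picking the final hypothesis. Everything below (the `rerank` helper, the toy bigram scorer, the interpolation weight `alpha`) is hypothetical and only illustrates the re-ranking idea, not the paper's actual implementation.

```python
def rerank(hypotheses, lm_score, alpha=0.5):
    """Pick the best hypothesis by interpolating the decoder's
    sequence log-probability with a language-model score.

    hypotheses: list of (word, decoder_log_prob) pairs from beam search.
    lm_score:   function mapping a word to a log-probability.
    alpha:      illustrative interpolation weight (an assumption).
    """
    def combined(h):
        word, log_p = h
        return (1 - alpha) * log_p + alpha * lm_score(word)
    return max(hypotheses, key=combined)

# Toy character-bigram scorer standing in for the decoder's LM capability.
BIGRAMS = {"he": -0.1, "el": -0.2, "ll": -0.3, "lo": -0.2}

def toy_lm(word):
    # Unseen bigrams get a flat penalty; all values are illustrative.
    return sum(BIGRAMS.get(word[i:i + 2], -4.0) for i in range(len(word) - 1))

# The misspelled beam candidate has a slightly better decoder score,
# but the LM term pushes the correctly spelled word to the top.
best = rerank([("hello", -1.2), ("hxllo", -1.0)], toy_lm)
```

In practice the second-stage scorer would be the trained decoder itself run in scoring mode over each beam hypothesis, rather than a separate n-gram model.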
Keywords
Transformer Model,Language Model,Language Translation,Word Length,Pose Estimation,Sign Language,Sign Language Translation,American Sign Language,Loss Function,Model Performance,Convolutional Neural Network,Decoding,Contextual Information,Recurrent Neural Network,Video Frames,Optical Flow,Vector Of Size,Sine And Cosine,Decoding Process,Hand Joints,Beam Search,Individual Letters,Sign Language Recognition,Pose Estimation Methods,Self-attention Module,Attention Heads,Sequence Of Tokens,Inference Stage,Video Collection,Attention Mechanism