Transformer-based rapid human pose estimation network

Computers & Graphics (2023)

Abstract
Most current human pose estimation methods pursue high accuracy through large models with intensive computational requirements, resulting in slow inference. Such methods cannot be effectively deployed in real applications due to their high memory and computational costs. To achieve a trade-off between accuracy and efficiency, we propose TRPose, a Transformer-based network for rapid human pose estimation. TRPose seamlessly combines an early convolutional stage with a later Transformer stage. Concretely, the convolutional stage forms a Rapid Fusion Module (RFM), which efficiently acquires multi-scale features via three parallel convolution branches. The Transformer stage utilizes multi-resolution Transformers to construct a Dual-scale Encoder Module (DEM), which learns long-range dependencies among the human skeletal keypoints from features at different scales. Experiments show that TRPose achieves 74.3 AP and 73.8 AP on the COCO validation and test-dev sets at 170+ FPS on a GTX 2080Ti, a better trade-off between efficiency and effectiveness than most state-of-the-art methods. Our model also outperforms mainstream Transformer-based architectures on the MPII dataset, yielding an 89.9 PCK@0.5 score on the val set without extra data.
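The multi-scale fusion idea behind the RFM can be illustrated at the shape level: run the input feature map through parallel branches at different strides, bring every branch back to full resolution, and fuse by summation. The sketch below is a minimal NumPy illustration of that pattern only; the function names, the use of average pooling in place of learned strided convolutions, and the choice of strides 1/2/4 are all assumptions for illustration, not TRPose's actual layers.

```python
import numpy as np

def branch(x, stride):
    # Downsample by average pooling with the given stride; a stand-in for a
    # learned strided-convolution branch (hypothetical simplification).
    h, w = x.shape
    x = x[: h - h % stride, : w - w % stride]
    return x.reshape(h // stride, stride, w // stride, stride).mean(axis=(1, 3))

def upsample(x, factor):
    # Nearest-neighbour upsampling back to full resolution.
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

def rapid_fusion(x):
    # Three parallel branches at strides 1, 2, and 4, fused by summation,
    # so the output mixes fine detail with coarser spatial context.
    b1 = branch(x, 1)
    b2 = upsample(branch(x, 2), 2)
    b4 = upsample(branch(x, 4), 4)
    return b1 + b2 + b4

feat = np.arange(64, dtype=float).reshape(8, 8)
fused = rapid_fusion(feat)
print(fused.shape)  # (8, 8): fused map keeps the input resolution
```

The key property the sketch demonstrates is that all branches are computed in parallel from the same input and merged without changing the output resolution, which is what keeps such a fusion stage cheap at inference time.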
Keywords
Transformer architecture, Human pose estimation, Inference speed, Computational cost