PaLmTac: A Vision-based Tactile Sensor Leveraging Distributed-Modality Design and Modal-matching Recognition for Soft Hand Perception

IEEE Journal of Selected Topics in Signal Processing (2024)

Abstract
This paper proposes a vision-based tactile sensor (VBTS), named PaLmTac, embedded in the palm of a soft hand. Instead of stacking functional layers, we adopt a distributed-modality design, which solves the problem of integrating unrelated modalities (texture and temperature) and, combined with regional recognition, avoids mixing unrelated information. A Level-Regional Feature Extraction Network (LRFE-Net) is presented to match this modality design. We leverage feature mapping, regional convolution, and regional vectorization to construct a regional recognition mechanism that extracts features in parallel and controls the degree of fusion, while a level recognition mechanism balances the learning difficulty of each modality. Compared with existing VBTSs, PaLmTac improves the integration of unrelated modalities and reduces fusion interference. This paper provides a novel approach to multimodal VBTS design and sensing, which is expected to find application in human-computer interaction scenarios based on multimodal fusion.
Keywords
Vision-based tactile sensor, Soft hand, Regional recognition mechanism, Level recognition mechanism