Research on demonstration task segmentation method based on multi-mode information

Wei Zhang, Tieze Cao, Anbing Sun, Xiaochuan Gan, Jingjing Fan, Lina Hao, Hongtai Cheng

2022 12th International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER)

Abstract
Since demonstration data contains information such as the instructor's demonstration intention, the pose of the workpiece, and environmental constraints, it is difficult to segment teaching data accurately using a single method. Therefore, this paper proposes a segmentation method based on multimodal information to solve this problem. The demonstration data is preliminarily segmented by a method based on gestures, trajectory variance, and contact force, and the demonstration tasks are then accurately segmented into unconstrained tasks, position-constrained tasks, and force-constrained tasks by fused segmentation criteria. Finally, the effectiveness of the proposed segmentation method based on multimodal information is verified by reproducing experiments on assembling planetary gear reducers.
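The abstract does not give the fused segmentation criteria in detail. As a rough illustration only, the Python sketch below shows how two of the cues named above, contact-force magnitude and trajectory variance, could separate demonstration segments into the three task types; the function classify_segment and the threshold values FORCE_THRESHOLD_N and VARIANCE_THRESHOLD are hypothetical and not taken from the paper.

```python
import numpy as np

# Hypothetical thresholds; the paper does not report its actual criteria values.
FORCE_THRESHOLD_N = 2.0     # mean contact force above this suggests a force-constrained phase
VARIANCE_THRESHOLD = 1e-4   # low end-effector position variance suggests a position-constrained phase


def classify_segment(positions, contact_forces):
    """Assign one coarse label to a demonstration segment.

    positions      : (N, 3) array of end-effector positions [m]
    contact_forces : (N,) array of contact-force magnitudes [N]
    Returns one of 'force-constrained', 'position-constrained', 'unconstrained'.
    """
    mean_force = float(np.mean(contact_forces))
    # Trajectory variance: per-axis position variance averaged over x, y, z.
    traj_variance = float(np.mean(np.var(positions, axis=0)))

    if mean_force > FORCE_THRESHOLD_N:
        return "force-constrained"       # sustained contact with the environment
    if traj_variance < VARIANCE_THRESHOLD:
        return "position-constrained"    # little free motion, e.g. fine alignment
    return "unconstrained"               # free-space transfer motion


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic free-space segment: large motion, negligible contact force.
    free_positions = np.cumsum(rng.normal(0, 0.01, size=(200, 3)), axis=0)
    free_forces = np.abs(rng.normal(0, 0.1, size=200))
    print(classify_segment(free_positions, free_forces))  # -> unconstrained
```

In the paper these cues are fused with gesture information for the preliminary segmentation; the sketch only covers the per-segment classification step under the stated assumptions.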
Keywords
demonstration task segmentation method, multimode information, environmental constraints, multimodal information, trajectory variance, contact force, unconstrained tasks, position-constrained tasks, force-constrained tasks, fused segmentation criteria, teaching data segmentation, planetary gear reducer assembling