VLAS : A Vision-Language-Action Integrated System for Mobile Social Service Robots

semanticscholar (2018)

Abstract
Due to their mobility and attractive appearance, recent mobile social service robots are well suited to human-robot interactive roles required of domestic service robots, such as a waiter in a restaurant or a companion for the elderly at home. However, commercialized service robots are not equipped with high-end devices and are hard to retrofit, so they are more limited in performance and device extensibility than custom-built laboratory robots. We propose the Vision-Language-Action integrated System (VLAS), which alleviates these hardware weaknesses by applying advanced machine learning techniques to leverage integrated high-level information, much as humans make good use of imperfect physical components through multisensory perception. We set up a pseudo-home environment and a home party scenario to evaluate VLAS quantitatively and qualitatively. The experimental results showed that VLAS robustly performed the various sub-tasks of the home party scenario on a commercial social service robot.