Verbally Assisting Virtual-Environment Tactile Maps: A cross-linguistic and cross-cultural study

Advances in Intelligent Systems and Computing(2014)

Abstract
The Verbally Assisting Virtual-Environment Tactile Maps (VAVETaM) approach proposes to increase the effectiveness of tactile maps by realizing an intelligent multi-modal tactile map system that generates assisting utterances supporting the acquisition of survey knowledge from virtual tactile maps. Two experiments conducted in German, one with blindfolded sighted people and one with blind and visually impaired people, show that both types of participants benefit from verbal assistance. In this paper, we report an experiment testing the adaptation of the German prototype for use by Chinese native speakers. This study shows that the VAVETaM system can be adapted to the Chinese language with comparably small effort. The Chinese participants' achievement in acquiring survey knowledge is comparable to that of the participants in the German study. This supports the view that human processing of representationally multi-modal information is comparable across different cultures and languages. © Springer-Verlag Berlin Heidelberg 2014.
Keywords
audio-tactile map,multi-modal interface,spatial knowledge acquisition,verbal assisting utterances,virtual haptics,virtual tactile map