SoundWatch: Exploring Smartwatch-based Deep Learning Approaches to Support Sound Awareness for Deaf and Hard of Hearing Users

ASSETS '20: The 22nd International ACM SIGACCESS Conference on Computers and Accessibility, Virtual Event, Greece, October 2020 (2020)

Abstract
Smartwatches have the potential to provide glanceable, always-available sound feedback to people who are deaf or hard of hearing. In this paper, we present a performance evaluation of four low-resource deep learning sound classification models: MobileNet, Inception, ResNet-lite, and VGG-lite, across four device architectures: watch-only, watch+phone, watch+phone+cloud, and watch+cloud. While direct comparison with prior work is challenging, our results show that the best model, VGG-lite, performed similarly to the state of the art for non-portable devices, with an average accuracy of 81.2% (SD=5.8%) across 20 sound classes and 97.6% (SD=1.7%) across the three highest-priority sounds. Among device architectures, we found that the watch+phone architecture provided the best balance of CPU, memory, network usage, and classification latency. Based on these experimental results, we built a smartwatch-based sound awareness app, called SoundWatch (Figure 1), and conducted a qualitative lab evaluation with eight deaf and hard of hearing (DHH) participants. Qualitative findings show support for our sound awareness app but also uncover concerns related to misclassifications, latency, and privacy. We close by offering design considerations for future wearable sound awareness technology.
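To make the on-device classification pipeline concrete, below is a minimal sketch of how a low-resource CNN like those evaluated here might be run on a watch or phone via TensorFlow Lite. The model file name, label set, and the assumption that the exported model accepts a fixed-length raw-waveform window (as YAMNet-style audio models do) are all illustrative, not taken from the paper.

```python
# Hypothetical sketch: classifying one second of audio with a lightweight
# CNN exported to TensorFlow Lite (e.g. a MobileNet-style sound classifier).
import numpy as np
import tensorflow as tf

SAMPLE_RATE = 16000                            # assumption: 1 s, 16 kHz mono windows
MODEL_PATH = "sound_classifier.tflite"         # hypothetical model file
LABELS = ["fire alarm", "door knock", "dog bark"]  # illustrative subset of classes

# Load the model once and reuse the interpreter for every audio window.
interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def classify(window: np.ndarray) -> tuple[str, float]:
    """Run one fixed-length audio window through the model and return
    the top predicted label with its score."""
    # Match the model's expected input shape (assumes a raw-waveform input).
    features = window.astype(np.float32).reshape(input_details[0]["shape"])
    interpreter.set_tensor(input_details[0]["index"], features)
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details[0]["index"])[0]
    top = int(np.argmax(scores))
    return LABELS[top], float(scores[top])

# Example: classify a silent test window.
label, score = classify(np.zeros(SAMPLE_RATE, dtype=np.float32))
print(label, score)
```

In the watch+phone architecture the paper favors, the watch would capture the audio window and the phone would host an interpreter like this one, trading a small transfer cost for faster inference than watch-only execution.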
Keywords
Accessibility, Deaf, hard of hearing, sound awareness, smartwatch, wearable, deep learning, CNN, sound classification