DeepVehicleSense: An Energy-efficient Transportation Mode Recognition Leveraging Staged Deep Learning over Sound Samples

IEEE Transactions on Mobile Computing (2022)

Abstract
In this paper, we present DeepVehicleSense, a new transportation mode recognition system for smartphones that is widely applicable to mobile context-aware services. DeepVehicleSense aims to achieve three performance objectives at once: high accuracy, low latency, and low power consumption, by exploiting sound characteristics captured by the built-in microphone while the user rides a candidate transportation mode. To attain high energy efficiency, DeepVehicleSense adopts hierarchical accelerometer-based triggers that minimize activation of the smartphone microphone. Further, to achieve high accuracy and low latency, DeepVehicleSense employs non-linear filters that best extract the transportation sound samples. To recognize five different transportation modes, we design a deep-learning-based sound classifier built on a novel deep neural network architecture with multiple branches. Our staged inference technique significantly reduces runtime and energy consumption while maintaining high accuracy for the majority of samples. Using 263 hours of data collected with seven different Android phone models, we demonstrate that DeepVehicleSense achieves a recognition accuracy of 97.44% with only 2-second sound samples at an average power consumption of 35.08 mW for all-day monitoring.
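To illustrate the staged (early-exit) inference idea described above, the following is a minimal PyTorch sketch of a multi-branch sound classifier, not the authors' released code: the layer sizes, the 40-dimensional filterbank input, the 0.9 confidence threshold, and the names EarlyExitBranch and StagedSoundClassifier are all illustrative assumptions. Easy samples are classified by an early branch and skip the deeper stages, which is the mechanism the abstract credits for the runtime and energy savings.

```python
# Minimal sketch of staged inference with a multi-branch classifier.
# All sizes, thresholds, and class names here are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 5  # five transportation modes, as stated in the abstract


class EarlyExitBranch(nn.Module):
    """Lightweight classification head attached to an intermediate feature map."""

    def __init__(self, in_channels: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.fc = nn.Linear(in_channels, NUM_CLASSES)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return self.fc(self.pool(feat).squeeze(-1))


class StagedSoundClassifier(nn.Module):
    """Backbone of 1-D conv blocks over sound features, with one exit branch per stage."""

    def __init__(self, in_features: int = 40, channels=(32, 64, 128)):
        super().__init__()
        self.stages = nn.ModuleList()
        self.exits = nn.ModuleList()
        prev = in_features
        for ch in channels:
            self.stages.append(nn.Sequential(
                nn.Conv1d(prev, ch, kernel_size=3, padding=1),
                nn.BatchNorm1d(ch),
                nn.ReLU(),
                nn.MaxPool1d(2),
            ))
            self.exits.append(EarlyExitBranch(ch))
            prev = ch

    def forward(self, x: torch.Tensor, threshold: float = 0.9) -> torch.Tensor:
        """x: (batch, in_features, time) sound features from a 2-second clip.
        Returns logits from the first branch whose softmax confidence exceeds
        `threshold`; falls through to the last (deepest) branch otherwise."""
        logits = None
        for stage, exit_head in zip(self.stages, self.exits):
            x = stage(x)
            logits = exit_head(x)
            conf, _ = F.softmax(logits, dim=-1).max(dim=-1)
            # Exit early only when every sample in the batch is confident
            # (at inference time the batch is typically a single clip).
            if bool((conf >= threshold).all()):
                return logits
        return logits


if __name__ == "__main__":
    model = StagedSoundClassifier().eval()
    clip = torch.randn(1, 40, 200)  # hypothetical 2-second feature sequence
    with torch.no_grad():
        print(model(clip).argmax(dim=-1))
```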
Keywords
Context-aware computing, activity recognition, transportation mode, deep learning, staged inference, sound data, low power