Audio Classification Systems Using Deep Neural Networks And An Event-Driven Auditory Sensor

2019 IEEE SENSORS (2019)

Abstract
We describe ongoing research in developing audio classification systems that use a spiking silicon cochlea as the front end. Event-driven features extracted from the spikes are fed to deep networks for the intended task. We describe a classification task on naturalistic audio sounds using a low-power silicon cochlea that outputs asynchronous events through a send-on-delta encoding of its sharply tuned cochlea channels. Because of the event-driven nature of the processing, silences in these naturalistic sounds lead to a corresponding absence of cochlea spikes and therefore to savings in computes. Results show 48% savings in computes with a small loss in accuracy when using cochlea events.
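As a rough illustration of the send-on-delta idea mentioned in the abstract, the sketch below emits a signed event whenever a channel's signal moves by more than a threshold from its value at the last emitted event; the threshold, the synthetic envelope input, and the function name are illustrative assumptions, not details of the paper's hardware encoder.

    # Minimal sketch of send-on-delta (SOD) event encoding for one channel.
    # The delta threshold and the test signal are illustrative, not from the paper.
    import numpy as np

    def send_on_delta(samples: np.ndarray, delta: float):
        """Emit (index, sign) events whenever the signal moves by more than
        `delta` from its value at the last emitted event."""
        events = []
        ref = samples[0]                      # level at the last emitted event
        for i, x in enumerate(samples[1:], start=1):
            if x - ref >= delta:              # upward change exceeds threshold
                events.append((i, +1))
                ref = x
            elif ref - x >= delta:            # downward change exceeds threshold
                events.append((i, -1))
                ref = x
        return events

    if __name__ == "__main__":
        # Signal that is silent in its first half: no events are produced there,
        # so an event-driven downstream network does no work on that stretch,
        # which is the source of the reported compute savings.
        t = np.linspace(0, 1, 16000)
        envelope = np.abs(np.sin(2 * np.pi * 3 * t)) * (t > 0.5)
        evts = send_on_delta(envelope, delta=0.1)
        print(f"{len(evts)} events, all after the silent half")

In this toy setting the silent half of the signal contributes zero events, mirroring how silences in the naturalistic sounds translate directly into skipped computation.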
Keywords
event-driven audio, edge computing, spiking cochlea, deep learning, sound classification, low-power cochlea