GlimpseData: towards continuous vision-based personal analytics.

MobiSys 2014

Abstract
Emerging wearable devices provide a new opportunity for mobile context-aware applications to use continuous audio/video sensing data as primitive inputs. Due to the high-datarate and compute-intensive nature of these inputs, it is important to design frameworks and applications to be efficient. We present the GlimpseData framework to collect and analyze data for studying continuous high-datarate mobile perception. As a case study, we show that low-powered sensors can serve as a filter to avoid sensing and processing video for face detection. Our relatively simple mechanism avoids processing roughly 60% of video frames while missing only 10% of frames containing faces.
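The case study relies on gating the expensive vision pipeline with cheap sensor readings, but the abstract does not specify the actual filtering rule. The Python sketch below only illustrates that general pattern; the sensor fields (ambient_light, motion_energy), thresholds, and helper names are hypothetical, not the paper's mechanism.

```python
# Illustrative sketch only: gate an expensive face detector with low-power
# sensor readings so that most frames are skipped before any video processing.
# All field names and threshold values below are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class SensorSample:
    ambient_light: float   # lux from a hypothetical low-power light sensor
    motion_energy: float   # accelerometer variance (hypothetical motion proxy)

def should_process(sample: SensorSample,
                   min_light: float = 10.0,
                   max_motion: float = 2.5) -> bool:
    """Cheap gate: skip frames captured in the dark or during heavy motion,
    where face detection is unlikely to succeed anyway."""
    return sample.ambient_light >= min_light and sample.motion_energy <= max_motion

def filter_frames(frames, sensor_samples, detect_faces):
    """Run the expensive detector only on frames that pass the sensor gate."""
    results = []
    for frame, sample in zip(frames, sensor_samples):
        if should_process(sample):
            results.append((frame, detect_faces(frame)))   # full detection
        else:
            results.append((frame, None))                   # frame skipped
    return results
```

In this kind of design, the trade-off is between the fraction of frames skipped (compute and energy saved) and the fraction of face-containing frames missed, which is the 60%/10% balance the abstract reports.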