Activity recognition from a wearable camera

ICARCV (2012)

Cited by 25 | Views 17
Abstract
This paper proposes a novel activity recognition approach for video data obtained from a wearable camera. The objective is to recognise the user's activities from a tiny front-facing camera embedded in his/her glasses. Our system allows carers to remotely access the current status of a specified person, which can be broadly applied to people living with disabilities, including the elderly who require cognitive assistance or guidance in daily activities. We collected, trained and tested our system on videos recorded in different environmental settings. Sequences of four basic activities (drinking, walking, going upstairs and going downstairs) are tested and evaluated in challenging real-world scenarios. An optical flow procedure is used as our primary feature extraction method, from which we downsize, reformat and classify sequences of activities using the k-Nearest Neighbour algorithm (k-NN), LogitBoost (on decision stumps) and a Support Vector Machine (SVM). We determine optimal settings for these classifiers through cross-validation and achieve accuracies of 54.2% to 71.9%. Further smoothing using a Hidden Markov Model (HMM) improves the results to 68.5%-82.1%.
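The pipeline described above (per-frame motion features, frame-wise classification, then temporal smoothing over the predicted label sequence) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the feature vectors stand in for the downsized optical-flow descriptors, the k-NN classifier mirrors the paper's k-NN baseline, and a sliding majority vote is used as a simple hedged stand-in for the HMM smoothing step.

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Classify one feature vector by a Euclidean-distance k-NN vote."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]          # indices of k closest training frames
    votes = train_y[nearest]
    return int(np.bincount(votes).argmax())  # majority label among neighbours

def smooth_labels(labels, window=3):
    """Sliding majority vote over a label sequence.

    A simple stand-in for the paper's HMM smoothing: both exploit the fact
    that activities persist over many consecutive frames.
    """
    labels = np.asarray(labels)
    out = []
    for i in range(len(labels)):
        lo = max(0, i - window // 2)
        hi = min(len(labels), i + window // 2 + 1)
        out.append(int(np.bincount(labels[lo:hi]).argmax()))
    return out
```

A usage example with two synthetic "activity" clusters: classify each incoming frame descriptor with `knn_predict`, collect the per-frame labels, then pass the sequence through `smooth_labels` to suppress isolated misclassifications.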
Keywords
video signal processing,cognition,sequence downsize,wearable computers,sequence reformat,cognitive guidance,geriatrics,k-nn,k-nearest neighbour algorithm,svm,feature extraction method,tiny front-facing camera,cognitive assistance,logitboost,decision stumps,activity recognition approach,optical flow procedure,image sensors,feature extraction,image classification,video data,support vector machine,image sequences,decision theory,object recognition,assisted living,wearable camera,sequence classification,hidden markov models,handicapped aids,elderly,support vector machines,hidden markov model