Applying attributes to improve human activity recognition

2015 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), 2015

Abstract
Activity and event recognition from video has traditionally relied on low-level features rather than higher-level text-based class attributes and ontologies, because low-level features have been more effective on small datasets. However, by including human knowledge-driven associations between actions and attributes, and by recognizing the lower-level attributes along with their temporal relationships, we can learn a much larger set of activities and improve low-level feature-based algorithms by incorporating an expert knowledge ontology. In an event ontology, events can be broken down into actions, and these can be decomposed further into attributes. For example, throwing events can include throwing stones or baseballs, with the object being relocated from a hand, through the air, to a location of interest. The throwing motion can be broken down into the many physical attributes that describe it, such as BodyPartsUsed = Hands, BodyPartArticulation-Arm = OneArmRaisedOverHead, and many others. Building general attributes from video and merging them into an ontology for recognition allows significant reuse in the development of activity and event classifiers. Each activity or event classifier is composed of interacting attributes in the same way that sentences are composed of interacting letters to create a complete language.
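To make the event-to-attribute decomposition concrete, the following is a minimal Python sketch, not taken from the paper, of how an attribute-based event definition could be represented and matched against attribute detections from video. The event name ThrowObject, the attribute strings, and the simple "required attributes observed in temporal order" matching rule are hypothetical illustrations of the idea, not the authors' ontology or recognition algorithm.

# Minimal sketch of an attribute-based event ontology (hypothetical names and
# a simplified "all required attributes observed in order" rule; the paper's
# actual ontology and temporal model are richer).
from dataclasses import dataclass, field


@dataclass
class AttributeDetection:
    """A low-level attribute recognized in video, with its time interval."""
    name: str
    start: float  # seconds
    end: float


@dataclass
class EventDefinition:
    """An event decomposed into attributes that must occur in the given order."""
    name: str
    ordered_attributes: list = field(default_factory=list)

    def matches(self, detections):
        """Return True if every required attribute is detected and their
        earliest start times respect the specified temporal ordering."""
        times = []
        for attr in self.ordered_attributes:
            hits = [d for d in detections if d.name == attr]
            if not hits:
                return False
            times.append(min(h.start for h in hits))
        return all(a <= b for a, b in zip(times, times[1:]))


# Hypothetical decomposition of a throwing event into physical attributes,
# in the spirit of the abstract's BodyPartsUsed = Hands example.
THROWING = EventDefinition(
    name="ThrowObject",
    ordered_attributes=[
        "BodyPartsUsed=Hands",
        "BodyPartArticulation-Arm=OneArmRaisedOverHead",
        "ObjectMotion=AirborneTowardTarget",
    ],
)

if __name__ == "__main__":
    observed = [
        AttributeDetection("BodyPartsUsed=Hands", 0.0, 2.5),
        AttributeDetection("BodyPartArticulation-Arm=OneArmRaisedOverHead", 0.8, 1.4),
        AttributeDetection("ObjectMotion=AirborneTowardTarget", 1.5, 2.2),
    ]
    print(THROWING.name, "recognized:", THROWING.matches(observed))

Because attribute detectors such as these are shared across many event definitions, the same detections can be reused to build classifiers for new activities without retraining low-level models, which is the reuse benefit the abstract describes.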
Keywords
human activity recognition,event recognition,video,higher-level text-based class attributes,ontologies,human knowledge-driven associations,temporal relationships,low-level feature-based algorithms,expert knowledge ontology,physical attributes,event classifiers