SVQ-ACT: Querying for Actions over Videos.

Daren Chao, Kaiwen Chen, Nick Koudas

ICDE (2023)

Abstract
We present SVQ-ACT, a system capable of evaluating declarative action and object queries over input videos. Our approach is independent of the underlying object and action detection models utilized. Users may issue queries involving an action and specific objects (e.g., a human riding a bicycle, close to a traffic light, with a car left of the bicycle) and identify video clips that satisfy the query constraints. Our system operates in two main settings, online and offline. In the online setting, the user specifies a video source (e.g., a surveillance video) and a declarative query containing an action and object predicates; the system identifies and labels, in real time, all frame sequences that match the query. In the offline mode, the system accepts a video repository as input, preprocesses all videos offline, and extracts suitable metadata. Following this step, users can interactively execute any query they wish on the video repository (over the actions and objects supported by the underlying detection models) to identify sequences of frames from videos that satisfy the query. In this case, to limit the number of results produced, we introduce novel result-ranking algorithms that produce the k most relevant results efficiently. We demonstrate that SVQ-ACT correctly captures the desired query semantics and executes queries efficiently, delivering a high degree of accuracy.
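As a rough illustration of the query semantics described in the abstract, the sketch below evaluates the example object constraints (a bicycle close to a traffic light, with a car left of the bicycle) over per-frame object detections and groups consecutive matching frames into clips. The predicate definitions, pixel threshold, and data structures are illustrative assumptions, not the authors' implementation; the action predicate (e.g., "riding") would come from a separate action-detection model and is omitted here.

```python
# Hypothetical sketch of spatial-predicate evaluation over per-frame
# detections; thresholds and predicate semantics are assumptions.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str
    x: float  # bounding-box centre x (pixels)
    y: float  # bounding-box centre y (pixels)


def left_of(a: Detection, b: Detection) -> bool:
    """a is 'left of' b when a's centre lies to the left of b's centre."""
    return a.x < b.x


def close_to(a: Detection, b: Detection, threshold: float = 50.0) -> bool:
    """Two objects are 'close' when their centres are within a pixel threshold."""
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5 <= threshold


def frame_matches(dets: list[Detection]) -> bool:
    """Check the example query's object predicates on one frame:
    a bicycle close to a traffic light, and a car left of the bicycle."""
    by_label: dict[str, list[Detection]] = {}
    for d in dets:
        by_label.setdefault(d.label, []).append(d)
    return any(
        close_to(bike, light) and left_of(car, bike)
        for bike in by_label.get("bicycle", [])
        for light in by_label.get("traffic light", [])
        for car in by_label.get("car", [])
    )


def matching_clips(frames: list[list[Detection]]) -> list[tuple[int, int]]:
    """Return (start, end) frame-index ranges of maximal runs of matches."""
    clips, start = [], None
    for i, dets in enumerate(frames):
        if frame_matches(dets):
            start = i if start is None else start
        elif start is not None:
            clips.append((start, i - 1))
            start = None
    if start is not None:
        clips.append((start, len(frames) - 1))
    return clips
```

In a full system, `frame_matches` would be generated from the parsed declarative query rather than hard-coded, and the per-frame results would be joined with the action detector's output before clips are reported.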
Keywords
containing actions, declarative action, declarative query, desired query semantics, input videos, object predicates, object queries, query constraints, surveillance video, SVQ-ACT, underlying detection models, underlying object, video clips, video repository, video source