DifFAR: Differentiable Frequency-based Disentanglement for Aerial Video Action Recognition

arXiv (2023)

Abstract
We present a learning algorithm, DifFAR, for human activity recognition in videos. Our approach is designed for UAV videos, which are mainly acquired from obliquely placed dynamic cameras and contain a human actor along with background motion. Typically, the human actor occupies less than one-tenth of the spatial resolution. DifFAR simultaneously harnesses the benefits of frequency-domain representations, a classical analysis tool in signal processing, and data-driven neural networks. We build a differentiable static-dynamic frequency mask prior that models the salient static and dynamic pixels in the video, which are crucial for the underlying task of action recognition. Using this differentiable mask prior, the neural network intrinsically learns disentangled feature representations via an identity loss function. Our formulation empowers the network to inherently compute disentangled salient features within its layers. Further, we propose a cost function encapsulating temporal relevance and spatial content to sample the most important frame within uniformly spaced video segments. We conduct extensive experiments on the UAV Human dataset and the NEC Drone dataset and demonstrate relative improvements of 5.72%-13.00% over the state-of-the-art and 14.28%-38.05% over the corresponding baseline model.
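The abstract does not give the exact mask formulation, but the core idea of a frequency-based static/dynamic split can be sketched as follows: take the FFT of each pixel along the temporal axis, treat low temporal frequencies as "static" energy and higher frequencies as "dynamic" energy, and normalize the two into soft masks. The `cutoff` and `temperature` parameters below are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def static_dynamic_masks(video, cutoff=2, temperature=1.0):
    """Sketch of a frequency-based static/dynamic pixel split.

    video: (T, H, W) grayscale clip. For each pixel we take the FFT
    along time, then compare low-frequency (static) energy against
    high-frequency (dynamic) energy to form two soft masks that sum
    to one per pixel. cutoff and temperature are illustrative
    choices, not parameters from the paper.
    """
    spec = np.abs(np.fft.rfft(video, axis=0))   # (T//2 + 1, H, W)
    static_e = spec[:cutoff].sum(axis=0)        # DC + low temporal freqs
    dynamic_e = spec[cutoff:].sum(axis=0)       # higher temporal freqs
    # Softmax-style normalization (differentiable in a framework
    # such as PyTorch; NumPy is used here only for illustration).
    z = np.stack([static_e, dynamic_e]) / temperature
    z = z - z.max(axis=0, keepdims=True)        # numerical stability
    w = np.exp(z)
    w = w / w.sum(axis=0)
    return w[0], w[1]                           # static mask, dynamic mask

# Toy example: a static background with one oscillating pixel.
T, H, W = 16, 4, 4
video = np.ones((T, H, W))
video[:, 2, 2] += np.sin(np.linspace(0, 8 * np.pi, T))
m_static, m_dynamic = static_dynamic_masks(video)
print(m_dynamic[2, 2] > m_dynamic[0, 0])  # oscillating pixel reads as dynamic
```

In the paper this prior is differentiable and combined with an identity loss so that the network learns the disentanglement end-to-end; the hard FFT split above only conveys the frequency-domain intuition.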
Keywords
aerial video action recognition,background motion,classical analysis tool,cost-function,DifFAR,differentiable frequency-based disentanglement,differentiable mask,disentangled feature representations,disentangled salient features,dynamic pixels,frequency domain representations,human activity recognition,human actor,identity loss function,learning algorithm,neural network,obliquely placed dynamic cameras,salient static pixels,signal processing,spatial content,spatial resolution,static-dynamic frequency mask,UAV Human dataset,UAV videos,underlying task,video segments