Midas: Generating mmWave Radar Data from Videos for Training Pervasive and Privacy-preserving Human Sensing Tasks

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (2023)

Abstract
Millimeter-wave radar is a promising modality for pervasive and privacy-preserving human sensing. However, the lack of large-scale radar datasets limits the ability of deep learning models to achieve generalization and robustness. To close this gap, we design a software pipeline that leverages rich video repositories to generate synthetic radar data. Doing so confronts three key challenges: i) multipath reflection and attenuation of radar signals among multiple humans, ii) unconvertible generated data, which yields poor generality across applications, and iii) class imbalance in the source videos, which lowers model stability. To this end, we design Midas to generate realistic, convertible radar data from videos via two components: (i) a data generation network (DG-Net) that combines several key modules, depth prediction, human mesh fitting, and a multi-human reflection model, to simulate the multipath reflection and attenuation of radar signals and output convertible coarse radar data, followed by a Transformer model that refines the coarse data into realistic radar data; and (ii) a variant Siamese network (VS-Net) that selects key video clips to eliminate data redundancy and address the class-imbalance issue. We implement and evaluate Midas with video data from multiple external sources and with real-world radar data, demonstrating clear advantages over the state-of-the-art approach on both activity recognition and object detection tasks.
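To make the two-stage design concrete, below is a minimal Python sketch of the pipeline the abstract describes: VS-Net-style clip selection, then the DG-Net stages (depth prediction, mesh fitting, multipath reflection, Transformer refinement). Every function name, shape, and heuristic here is an illustrative assumption, not the authors' actual interface; the learned modules are replaced with trivial placeholders.

```python
# Hypothetical sketch of the Midas pipeline (assumed interfaces, not the paper's code).
import numpy as np

def predict_depth(frame: np.ndarray) -> np.ndarray:
    """Stand-in for DG-Net's depth-prediction module."""
    # A real system would run a learned monocular depth estimator here.
    return frame.mean(axis=-1)  # placeholder: grayscale as pseudo-depth

def fit_human_meshes(frame: np.ndarray, depth: np.ndarray) -> list:
    """Stand-in for human mesh fitting; returns per-person 3D point sets."""
    return [np.random.rand(64, 3)]  # placeholder: one synthetic person

def multi_human_reflection(meshes: list, tx_power: float = 1.0) -> np.ndarray:
    """Toy multipath/attenuation model: sum attenuated returns from all mesh
    points into a coarse range profile. Free-space two-way radar attenuation
    scales roughly with 1/r^4, the kind of physics DG-Net simulates."""
    ranges = np.concatenate([np.linalg.norm(m, axis=1) for m in meshes])
    bins = np.linspace(0.1, 5.0, 128)       # range bins of the coarse map
    coarse = np.zeros_like(bins)
    for r in ranges:
        coarse[np.argmin(np.abs(bins - r))] += tx_power / max(r, 0.1) ** 4
    return coarse

def refine_with_transformer(coarse: np.ndarray) -> np.ndarray:
    """Stand-in for the Transformer that maps coarse to realistic data."""
    kernel = np.ones(5) / 5.0               # placeholder smoothing step
    return np.convolve(coarse, kernel, mode="same")

def vs_net_select(clips: list, threshold: float = 0.9) -> list:
    """Toy VS-Net-style redundancy filter: keep a clip only if its cosine
    similarity to every already-kept clip stays below the threshold."""
    kept = []
    for c in clips:
        v = c.ravel() / (np.linalg.norm(c) + 1e-8)
        if all(float(v @ (k.ravel() / (np.linalg.norm(k) + 1e-8))) < threshold
               for k in kept):
            kept.append(c)
    return kept

if __name__ == "__main__":
    frames = [np.random.rand(32, 32, 3) for _ in range(4)]
    for f in vs_net_select(frames):          # de-duplicate input clips first
        depth = predict_depth(f)
        meshes = fit_human_meshes(f, depth)
        radar = refine_with_transformer(multi_human_reflection(meshes))
        print(radar.shape)                   # one synthetic radar profile
```

The ordering mirrors the abstract: VS-Net prunes redundant clips before generation so the synthetic dataset is balanced, and the physics-based reflection model produces convertible coarse data that the learned refinement stage makes realistic.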
Keywords
cross domain translation, data generation, human activity recognition, radar sensing