TUMTraf Event: Calibration and Fusion Resulting in a Dataset for Roadside Event-Based and RGB Cameras
IEEE Transactions on Intelligent Vehicles (2024)
Abstract
Event-based cameras are predestined for Intelligent Transportation Systems
(ITS). They provide very high temporal resolution and dynamic range, which can
eliminate motion blur and improve detection performance at night. However,
event-based images lack color and texture compared to images from a
conventional RGB camera. Considering that, data fusion between event-based and
conventional cameras can combine the strengths of both modalities. For this
purpose, extrinsic calibration is necessary. To the best of our knowledge, no
targetless calibration method between event-based and RGB cameras can handle
multiple moving objects, nor does a data fusion approach optimized for the
roadside ITS domain exist. Furthermore, no synchronized event-based and RGB
camera dataset recorded from a roadside perspective has yet been published. To fill these research
gaps, based on our previous work, we extended our targetless calibration
approach with clustering methods to handle multiple moving objects.
Furthermore, we developed an early fusion, simple late fusion, and a novel
spatiotemporal late fusion method. Lastly, we published the TUMTraf Event
Dataset, which contains more than 4,111 synchronized event-based and RGB images
with 50,496 labeled 2D boxes. During our extensive experiments, we verified the
effectiveness of our calibration method with multiple moving objects.
Furthermore, compared to a single RGB camera, we increased the detection
performance by up to +9% during the challenging night with our presented
event-based sensor fusion methods. The
TUMTraf Event Dataset is available at
https://innovation-mobility.com/tumtraf-dataset.
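To make the fusion terminology concrete: a "simple late fusion" typically runs a separate detector on each modality and merges the resulting 2D boxes. The sketch below is a hypothetical illustration of that idea, not the paper's actual method; the box format `(x1, y1, x2, y2, score)`, the greedy IoU-based suppression, and the 0.5 threshold are all assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def late_fusion(rgb_dets, event_dets, iou_thresh=0.5):
    """Merge detections from both modalities; among overlapping
    duplicates, keep only the highest-scoring box (greedy NMS).
    Each detection is (x1, y1, x2, y2, score)."""
    dets = sorted(rgb_dets + event_dets, key=lambda d: d[4], reverse=True)
    kept = []
    for d in dets:
        if all(iou(d[:4], k[:4]) < iou_thresh for k in kept):
            kept.append(d)
    return kept
```

Under this scheme an object seen by both cameras yields one fused box, while an object visible only in the event stream (e.g., a fast mover at night) still survives into the fused output.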
Keywords
Event-Based Cameras, RGB Cameras, Sensor Fusion, Targetless Calibration, Multi-modal Dataset, Intelligent Transportation Systems