3D Object Detection Through Fog and Occlusion: Passive Integral Imaging vs. Active (LiDAR) Sensing
Optics Express (2022), SCI Zone 2
University of Connecticut
Abstract
In this paper, we address the problem of object recognition in degraded environments including fog and partial occlusion. Both long-wave infrared (LWIR) imaging systems and LiDAR (time-of-flight) imaging systems using the Azure Kinect, which combines conventional visible and LiDAR sensing information, have previously been demonstrated for object recognition in ideal conditions. However, the object detection performance of Azure Kinect depth imaging systems may decrease significantly in adverse weather conditions such as fog, rain, and snow. The concentration of fog degrades the depth images of the Azure Kinect camera and the overall visibility of the RGBD images (fused RGB and depth images), which can make object recognition challenging. LWIR imaging may avoid these issues of LiDAR-based imaging systems. However, owing to the poor spatial resolution of LWIR cameras, thermal imaging provides limited textural information within a scene and hence may fail to provide adequate discriminatory information to distinguish between objects of similar texture, shape, and size. To improve object detection in fog and occlusion, we use a three-dimensional (3D) integral imaging (InIm) system with a visible-range camera. 3D InIm provides depth information, mitigates the occlusion and fog in front of the object, and improves object recognition capabilities. For object recognition, the YOLOv3 neural network is used for each of the tested imaging systems. Since the concentration of fog affects the images from different sensors (visible, LWIR, and Azure Kinect depth cameras) in different ways, we compared the performance of the network on these images in terms of average precision and average miss rate. For the experiments we conducted, the results indicate that in degraded environments 3D InIm using visible-range cameras can provide better image reconstruction than the LWIR camera and the Azure Kinect RGBD camera, and may therefore improve the detection accuracy of the network. To the best of our knowledge, this is the first report comparing object detection performance between a passive integral imaging system and active (LiDAR) sensing in degraded environments such as fog and partial occlusion.
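For context on the reconstruction step referenced in the abstract, the sketch below illustrates the standard shift-and-average computational reconstruction used in synthetic-aperture integral imaging: elemental images captured on a camera grid are back-projected to a chosen depth plane, so occluders and fog-scattered foreground in front of the object blur out while the in-focus object is reinforced. This is a minimal illustration, not the authors' code; the function and parameter names (reconstruct_depth_plane, pitch, focal_len, pixel_size, z) and the use of NumPy are assumptions made for the example.

# Minimal sketch (assumed, not the authors' implementation) of computational
# 3D integral imaging reconstruction: elemental images from a camera grid are
# shifted according to the target depth z and averaged, so out-of-focus
# foreground occluders are averaged out while the object at depth z stays sharp.
import numpy as np

def reconstruct_depth_plane(elemental_images, pitch, focal_len, pixel_size, z):
    """Shift-and-average reconstruction of the scene at depth z.

    elemental_images: dict mapping camera grid index (k, l) -> 2D numpy array
    pitch:      camera-to-camera spacing on the capture grid (same units as z)
    focal_len:  lens focal length (same units as pixel_size)
    pixel_size: sensor pixel pitch
    z:          reconstruction depth
    """
    images = list(elemental_images.items())
    accum = np.zeros_like(images[0][1], dtype=np.float64)
    # Pixel shift per grid step for a plane at depth z (paraxial approximation).
    shift = pitch * focal_len / (pixel_size * z)
    for (k, l), img in images:
        dy = int(round(k * shift))
        dx = int(round(l * shift))
        # np.roll wraps at the borders; a full implementation would crop or pad.
        accum += np.roll(img.astype(np.float64), (dy, dx), axis=(0, 1))
    return accum / len(images)

A plane reconstructed at the object's depth could then be passed to a detector such as YOLOv3 and scored with average precision and miss rate, as the paper does; the detector and metric code are standard and omitted here.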
Keywords
Object detection, fog, LiDAR sensing, passive imaging, LWIR imaging, three-dimensional integral imaging
Related Papers
Focus Issue Introduction: 3D Image Acquisition and Display: Technology, Perception and Applications
Optics Express 2024
Cited by 2
3D Object Detection Via 2D Segmentation-Based Computational Integral Imaging Applied to a Real Video
Sensors 2023
Cited by 3
Highly Efficient Broadband Spin-Multiplexed Metadevices for Futuristic Imaging Applications
Results in Physics 2023
Cited by 16
3D Object Tracking Using Integral Imaging with Mutual Information and Bayesian Optimization
Optics Express 2024
Cited by 1
Dual-Mode Polarization-Sensitive Tunable Metalens Enabling Bright-Field and Edge-Enhanced Imaging
Advances in Optical Thin Films VIII 2024
Cited by 0
Microwave Detection Towards Marine Climate Monitoring: Fog and Humidity
Sensors and Actuators B: Chemical 2024
Cited by 0
Polarimetric 3D Integral Imaging Profilometry under Degraded Environmental Conditions
Optics Express 2024
Cited by 1