Simulation of a multispectral, multicamera, off-road autonomous vehicle perception system with Virtual Autonomous Navigation Environment (VANE)

Proceedings of SPIE (2015)

Abstract
We present a case study in using specialized, physics-based software for high-fidelity environment and electro-optical sensor modeling in order to produce simulated sensor data that can be used to train a multi-spectral perception system for unmanned ground vehicle navigation. This case study used the Virtual Autonomous Navigation Environment (VANE) to simulate filtered, multi-spectral imaging sensors. The VANE utilizes ray tracing and hyperspectral material properties to capture the sensor-environment interaction. In this study we focus on a digital scene of the ERDC test track in Vicksburg, MS, that has an extremely detailed representation of the vegetation and ground texture. The scene model is used to generate imagery that simulates the output of specialized terrain perception hardware developed by Southwest Research Institute, which consists of a stereo pair of 3-channel cameras. The perception system utilizes stereo processing, the multi-spectral responses, and image texture features to create a 3-dimensional world model suitable for off-road vehicle navigation, providing depth information and an estimated terrain class label for every pixel by means of machine learning. While training the perception system generally involves hand-labeling data collected through manned missions, the ability to generate data for specific environments and lighting conditions represents an enabling technology for deployment in new theaters. We demonstrate an initial capability to simulate data and train the perception system, and we present the results compared to the system trained with real-world data from the same location.
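The abstract describes a perception pipeline in which stereo depth, multi-spectral channel responses, and image texture features feed a machine-learning classifier that assigns a terrain class to every pixel. The paper does not publish its implementation; the sketch below is only an illustrative outline of that kind of per-pixel classification, using assumed channel layouts, a simple local-variance texture cue, and a random forest as a stand-in for whatever learner the authors actually used. The random arrays stand in for simulated VANE renders and their labels.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

def texture_feature(channel, window=5):
    """Local variance as a simple per-pixel texture cue (illustrative only)."""
    mean = uniform_filter(channel, size=window)
    mean_sq = uniform_filter(channel ** 2, size=window)
    return mean_sq - mean ** 2

def build_features(multispectral, disparity):
    """Stack spectral channels, texture responses, and stereo disparity per pixel.

    multispectral: H x W x C array of filtered camera channels (hypothetical layout).
    disparity:     H x W array from stereo processing, standing in for depth.
    """
    h, w, c = multispectral.shape
    feats = [multispectral[..., i] for i in range(c)]
    feats += [texture_feature(multispectral[..., i]) for i in range(c)]
    feats.append(disparity)
    return np.stack(feats, axis=-1).reshape(h * w, -1)

# Hypothetical training frame: simulated imagery with per-pixel terrain labels
# (0 = trail, 1 = grass, 2 = brush). Random data replaces real VANE output here.
rng = np.random.default_rng(0)
sim_image = rng.random((120, 160, 3)).astype(np.float32)
sim_disparity = rng.random((120, 160)).astype(np.float32)
sim_labels = rng.integers(0, 3, size=(120, 160))

X_train = build_features(sim_image, sim_disparity)
clf = RandomForestClassifier(n_estimators=50, n_jobs=-1)
clf.fit(X_train, sim_labels.ravel())

# At run time, the same features computed from a new frame yield an estimated
# terrain class for every pixel, alongside the stereo depth information.
pred = clf.predict(build_features(sim_image, sim_disparity)).reshape(120, 160)
```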
Keywords
Unmanned Ground Vehicles, Multi-spectral Imaging, Modeling and Simulation, Off-road, Robotics, Perception