Event Vision for Autonomous Off-Road Navigation

Studies in Computational Intelligence (2023)

Abstract
Robotic automation has long been employed to optimize tasks deemed repetitive or hazardous for humans. One such application is transportation, whether in urban environments or in harsher settings. In these scenarios, the platform’s operator must maintain a heightened level of awareness at all times to ensure the safety of the on-board materials being transported. Additionally, on longer journeys the driver may be required to traverse difficult terrain under extreme conditions, for instance low-light, foggy, or haze-ridden paths. To counter this issue, recent studies have shown that the assistance of smart systems is necessary to minimize the risk involved. To develop such systems, this chapter discusses a concept for a Deep Learning (DL) based Vision Navigation (VN) approach capable of terrain analysis and of determining an appropriate steering angle within a margin of safety. Within the framework of Neuromorphic Vision (NV) and Event Cameras (EC), the proposed concept tackles several issues in the development of autonomous systems, in particular the use of a Transformer-based backbone for off-road depth estimation with an event camera to improve accuracy and processing time. The implementation of the above-mentioned deep learning system with an event camera relies on the necessary processing of the events prior to the training phase. In addition, binary convolutions and, alternatively, spiking convolution paradigms following the latest technology trends have been deployed as acceleration methods, offering efficiency in terms of energy, latency, and environmental robustness. Initial results hold promising potential for the future development of real-time projects with event cameras.
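The abstract mentions processing the raw events before the training phase but does not specify the representation used. A common choice in event-based deep learning is to accumulate the asynchronous events into a time-binned voxel grid that a convolutional or Transformer backbone can consume. The sketch below illustrates that idea only; the function name, the (x, y, t, polarity) event layout, and the 346x260 sensor resolution are assumptions for illustration and are not taken from the chapter.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate a stream of events into a time-binned voxel grid.

    `events` is assumed to be an (N, 4) array of (x, y, t, polarity)
    with polarity in {-1, +1}. This is a generic preprocessing step;
    the chapter's own pipeline may differ.
    """
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    if events.shape[0] == 0:
        return voxel

    t = events[:, 2]
    # Normalize timestamps to [0, num_bins - 1] so events are spread
    # across the temporal bins.
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1)

    x = events[:, 0].astype(np.int64)
    y = events[:, 1].astype(np.int64)
    p = events[:, 3].astype(np.float32)

    # Bilinear interpolation in time: each event contributes to the two
    # nearest temporal bins, weighted by its distance to each bin.
    left = np.floor(t_norm).astype(np.int64)
    right = np.clip(left + 1, 0, num_bins - 1)
    w_right = t_norm - left
    w_left = 1.0 - w_right

    np.add.at(voxel, (left, y, x), p * w_left)
    np.add.at(voxel, (right, y, x), p * w_right)
    return voxel

# Example usage with a few synthetic events (x, y, t, polarity):
events = np.array([[10, 20, 0.00, 1],
                   [11, 20, 0.05, -1],
                   [12, 21, 0.10, 1]], dtype=np.float64)
grid = events_to_voxel_grid(events, num_bins=5, height=260, width=346)
print(grid.shape)  # (5, 260, 346)
```

The resulting dense tensor preserves coarse temporal information while remaining compatible with standard image backbones, which is one reason this kind of representation is widely used for event-based depth estimation.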
Keywords
event, vision, off-road