Enhancing Scene Simulation for Autonomous Driving with Neural Point Rendering

Junqing Yang, Yuxi Yan, Shitao Chen, Nanning Zheng

2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC), 2023

Abstract
Simulation plays a critical role in the development and testing of autonomous driving, which encounters significant challenges when synthesizing complex driving scenarios and realistic sensor information. Existing scene simulation methods either fail to capture intricate physical characteristics of the 3D world or struggle to extend to autonomous driving datasets with uneven distribution of viewpoints. This paper proposes a point-based neural rendering approach to reconstruct and extend scenes, thereby generating real-world test data for autonomous driving systems from various views. By utilizing collected LiDAR data and filling in sparse regions in the point cloud, accurate depth and position references are provided. Additionally, the neural descriptor is enhanced by incorporating supplementary features relying on the observation views and sampling frequency, while rendering multi-scale descriptions to capture comprehensive information about the scene's appearance. Experimental results demonstrate that our method achieves high-quality rendering for large-scale autonomous driving scenes and enables scene editing to synthesize more diverse and adaptable testing scenes.
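The abstract describes rasterizing a point cloud that carries per-point neural descriptors at multiple image scales (an image pyramid). The paper's actual renderer is not reproduced here; the following is a minimal NumPy sketch of that general idea under simple assumptions: a pinhole camera, a z-buffer that keeps the nearest point per pixel, and intrinsics halved at each pyramid level. All function and variable names (`project_points`, `multiscale_rasterize`, etc.) are illustrative, not from the paper.

```python
import numpy as np

def project_points(points, descriptors, K, image_size):
    """Rasterize 3D points carrying descriptor vectors into a feature image,
    keeping the nearest point per pixel via a z-buffer."""
    H, W = image_size
    C = descriptors.shape[1]
    feat = np.zeros((H, W, C))
    zbuf = np.full((H, W), np.inf)
    # Pinhole projection: uv_homogeneous = K @ xyz, then divide by depth.
    uv = (K @ points.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    for (u, v), z, d in zip(uv, points[:, 2], descriptors):
        if z <= 0:  # behind the camera
            continue
        ui, vi = int(round(u)), int(round(v))
        if 0 <= ui < W and 0 <= vi < H and z < zbuf[vi, ui]:
            zbuf[vi, ui] = z
            feat[vi, ui] = d
    return feat

def multiscale_rasterize(points, descriptors, K, image_size, n_scales=3):
    """Render the same point cloud at several resolutions (an image pyramid),
    halving the image size and the intrinsics at each level."""
    pyramid = []
    H, W = image_size
    K = K.copy()
    for _ in range(n_scales):
        pyramid.append(project_points(points, descriptors, K, (H, W)))
        H, W = H // 2, W // 2
        K = K * np.array([[0.5], [0.5], [1.0]])  # scales fx, cx, fy, cy
    return pyramid
```

In a full pipeline of this kind, the resulting multi-scale feature images would be fed to a learned decoder network that produces the final RGB rendering; that stage is omitted here.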
Keywords
Autonomous Vehicles, Simulated Scene, Sampling Frequency, Real-world Data, Point Cloud, Position Of Point, Accurate Depth, Sparse Regions, Quantitative Comparison, 3D Reconstruction, Data-driven Methods, Peak Signal-to-noise Ratio, Ablation Experiments, Image Edge, 3D Point Cloud, Computer Graphics, Point Cloud Data, 3D Scene, Outdoor Scenes, Scene Representation, KITTI Dataset, View Synthesis, Sparse Point Cloud, Dynamic Obstacles, Point Cloud Features, Visual Simulation, Image Pyramid, Multi-view Images, Comparative Experiments, Manual Design