3D-LaneNet: End-to-End 3D Multiple Lane Detection

2019 IEEE/CVF International Conference on Computer Vision (ICCV)

Cited by 146 | Views 102
Abstract
We introduce a network that directly predicts the 3D layout of lanes in a road scene from a single image. This work marks a first attempt to address this task with on-board sensing without assuming a known constant lane width or relying on pre-mapped environments. Our network architecture, 3D-LaneNet, applies two new concepts: intra-network inverse-perspective mapping (IPM) and anchor-based lane representation. The intra-network IPM projection facilitates a dual-representation information flow in both regular image-view and top-view. An anchor-per-column output representation enables our end-to-end approach which replaces common heuristics such as clustering and outlier rejection, casting lane estimation as an object detection problem. In addition, our approach explicitly handles complex situations such as lane merges and splits. Results are shown on two new 3D lane datasets, a synthetic and a real one. For comparison with existing methods, we test our approach on the image-only tuSimple lane detection benchmark, achieving performance competitive with state-of-the-art.
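The abstract's core mechanism, inverse-perspective mapping (IPM), warps the forward-facing camera image onto the road plane to obtain a top view. The sketch below illustrates the underlying flat-ground homography with hypothetical camera parameters (intrinsics `K`, mounting height, downward pitch) using OpenCV. It is not the paper's implementation: 3D-LaneNet performs this projection inside the network with a differentiable sampling layer so image-view and top-view feature maps can be learned jointly end-to-end.

```python
# Minimal IPM sketch: warp a road image to a bird's-eye top view under a
# flat-ground assumption. All camera parameters below are illustrative
# placeholders, not values from the paper.
import numpy as np
import cv2

def ground_to_image_homography(K, cam_height, pitch):
    """3x3 homography mapping flat-ground coordinates (X right, Y forward,
    in meters) to image pixels, for a camera `cam_height` meters above the
    road, pitched down by `pitch` radians."""
    c, s = np.cos(pitch), np.sin(pitch)
    # World (X right, Y forward, Z up) -> camera (x right, y down, z forward),
    # with the camera pitched down by `pitch` about its x-axis.
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0,  -s,  -c],
                  [0.0,   c,  -s]])
    # Embed the ground plane Z = 0: (X, Y, 1) -> (X, Y, -cam_height).
    M = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, -cam_height]])
    return K @ R @ M

# Hypothetical 720p camera: focal length ~1000 px, 1.6 m height, 9 deg pitch.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
H_g2i = ground_to_image_homography(K, cam_height=1.6, pitch=np.deg2rad(9.0))

# Top-view canvas: a 10 m wide x 80 m long road patch on a 400x800 image.
px_per_m_x, px_per_m_y = 400 / 10.0, 800 / 80.0
# Top-view pixel (u, v) -> ground meters: X centered on the camera,
# Y (distance ahead) increasing toward the top of the canvas.
T_px2m = np.array([[1.0 / px_per_m_x, 0.0, -5.0],
                   [0.0, -1.0 / px_per_m_y, 80.0],
                   [0.0, 0.0, 1.0]])
# Compose: image pixel -> ground meters -> top-view pixel.
H_img2top = np.linalg.inv(H_g2i @ T_px2m)

image = cv2.imread("road.jpg")          # any forward-facing road image
top_view = cv2.warpPerspective(image, H_img2top, (400, 800))
cv2.imwrite("top_view.jpg", top_view)
```

In the top view, lanes become roughly vertical curves, which is what makes the anchor-per-column output representation natural: each top-view column serves as an anchor from which lateral offsets (and, in the 3D case, heights) of a lane are regressed, analogous to anchor boxes in object detection.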
Keywords
3D-LaneNet,road scene,single image,on-board sensing,pre-mapped environments,network architecture,intra-network inverse-perspective mapping,anchor-based lane representation,intra-network IPM projection,dual-representation information flow,regular image-view,anchor-per-column output representation,lane estimation,object detection problem,3D lane datasets,image-only tuSimple lane detection benchmark,3D multiple lane detection,regular top-view