You Only Look at Once for Real-time and Generic Multi-Task

CoRR (2023)

Abstract
High precision, a lightweight design, and real-time responsiveness are three essential requirements for implementing autonomous driving. Considering all of them simultaneously is a challenge. In this study, we present an adaptive, real-time, and lightweight multi-task model designed to concurrently handle object detection, drivable area segmentation, and lane detection tasks. To achieve this objective, we developed an end-to-end multi-task model with a unified and streamlined segmentation structure. Our model operates without the need for any task-specific customization structure or loss function. We achieved competitive results on the BDD100K dataset, particularly in visualization outcomes. The performance results show a mAP50 of 81.1% for object detection, a mIoU of 91.0% for drivable area segmentation, and an IoU of 28.8% for lane line segmentation. Additionally, we introduced a real-road dataset to evaluate our model's performance in real-world scenes, where it significantly outperforms competitors. This demonstrates that our model not only exhibits competitive performance but is also more flexible and faster than existing multi-task models. The source code and pre-trained models are released at https://github.com/JiayuanWang-JW/YOLOv8-multi-task
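The abstract describes a single shared backbone feeding one detection head and two segmentation heads that use a unified, streamlined structure. The PyTorch sketch below is only a minimal conceptual illustration of that layout, not the authors' released architecture: the class name MultiTaskNet, the layer sizes, and the head designs are assumptions for illustration. For the actual implementation and pre-trained weights, refer to the linked repository.

import torch
import torch.nn as nn


class MultiTaskNet(nn.Module):
    """Conceptual sketch: shared encoder with one detection head and two
    segmentation heads (drivable area, lane line). Layer widths and head
    designs are illustrative placeholders, not the paper's architecture."""

    def __init__(self, num_det_outputs: int = 85):
        super().__init__()
        # Shared feature extractor (stand-in for a YOLOv8-style backbone + neck).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.SiLU(),
        )
        # Detection head: per-cell box/class predictions on the coarse grid.
        self.det_head = nn.Conv2d(128, num_det_outputs, 1)
        # Two segmentation heads sharing the same lightweight structure,
        # mirroring the "unified segmentation structure" idea in the abstract.
        self.drivable_head = self._seg_head()
        self.lane_head = self._seg_head()

    @staticmethod
    def _seg_head() -> nn.Module:
        return nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1), nn.SiLU(),
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, 1),  # binary mask logits at input resolution
        )

    def forward(self, x):
        feats = self.backbone(x)  # features shared across all three tasks
        return {
            "detection": self.det_head(feats),
            "drivable_area": self.drivable_head(feats),
            "lane_line": self.lane_head(feats),
        }


if __name__ == "__main__":
    model = MultiTaskNet()
    outputs = model(torch.randn(1, 3, 384, 640))
    for task, out in outputs.items():
        print(task, tuple(out.shape))

Because the two segmentation heads share one structure and all heads consume the same backbone features, a single forward pass yields all three task outputs, which is what enables the real-time, end-to-end behavior the abstract claims.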
Keywords
real-time, multi-task