ScribbleNet: Efficient interactive annotation of urban city scenes for semantic segmentation

Pattern Recognition (2023)

Abstract
Annotation is a crucial first step in the semantic segmentation of urban images that facilitates the development of autonomous navigation systems. However, annotating complex urban images is time-consuming and challenging. It requires significant human effort, making it expensive and error-prone. To reduce this effort, many images must be annotated in a short time span. In this paper, we introduce ScribbleNet, an interactive image segmentation algorithm that addresses this issue. Our approach provides users with a pre-segmented image and iteratively improves the segmentation using scribbles as annotation input. The method is based on conditional inference and exploits the correlations learnt by a deep neural network (DNN). ScribbleNet can: (1) work with urban city scenes captured in unseen environments, (2) annotate new classes not present in the training set, and (3) correct several labels at once. We compare this method with other interactive segmentation approaches on multiple datasets, including CityScapes, BDD, Mapillary Vistas, KITTI, and IDD. ScribbleNet reduces the annotation time of an image by up to 14.7× over manual annotation and up to 5.4× over current approaches. The algorithm is integrated into the publicly available LabelMe image annotation tool and will be released as open-source software.
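The abstract's key idea is that a single scribble can correct many mislabelled pixels at once by propagating the user's label through the pre-segmented output. The sketch below is not ScribbleNet's DNN-based conditional inference; it is a minimal toy stand-in that mimics the interaction pattern by flood-filling the contiguous region of the original prediction that each scribbled pixel lands on. The function name and the flood-fill propagation rule are illustrative assumptions, not the paper's method.

```python
import numpy as np
from collections import deque


def apply_scribble(labels, scribble_pixels, new_label):
    """Relabel every predicted segment touched by a scribble.

    Toy stand-in for scribble-driven label correction: each scribbled
    pixel flood-fills the 4-connected region of the label it lands on,
    so one stroke can fix many pixels at once (cf. capability (3) in
    the abstract). `labels` is a 2D integer label map; `scribble_pixels`
    is a list of (row, col) coordinates the user marked.
    """
    out = labels.copy()
    h, w = labels.shape
    for sy, sx in scribble_pixels:
        old = labels[sy, sx]
        if out[sy, sx] == new_label:
            continue  # already corrected by an earlier scribble pixel
        queue = deque([(sy, sx)])
        while queue:
            y, x = queue.popleft()
            # Propagate only within bounds, within the original segment,
            # and only to pixels not yet relabelled.
            if 0 <= y < h and 0 <= x < w and out[y, x] != new_label \
                    and labels[y, x] == old:
                out[y, x] = new_label
                queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return out


# Usage: one scribble pixel at (0, 0) corrects the whole connected
# segment of class 0 to the user-chosen class 3.
pred = np.array([[0, 0, 1],
                 [0, 1, 1],
                 [2, 2, 2]])
corrected = apply_scribble(pred, [(0, 0)], new_label=3)
```

In the real system the propagation is learnt: the DNN conditions on the scribble and the image, so corrections can cross segment boundaries and generalise to unseen classes, which this geometric stand-in cannot do.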
Keywords
Annotation, Interactive segmentation, Human-in-the-Loop