Benchmarking Human Performance in Semi-Automated Image Segmentation

Interacting with Computers (2020)

Cited by 2 | Views 12
Abstract
Semi-automated segmentation algorithms hold promise for improving the extraction and identification of objects in images, such as tumours in medical images of human tissue, or plants and flowers counted for crop yield prediction, and for other tasks where object numbers and appearance vary from image to image. By blending markup from human annotators with algorithmic classifiers, the accuracy and reproducibility of image segmentation can, in principle, be raised to very high levels. At least, that is the promise of this approach, but the reality is less clear. In this paper, we review the state of the art in semi-automated image segmentation performance assessment and demonstrate that it lacks the level of experimental rigour needed to ensure that claims about algorithm accuracy and reproducibility can be considered valid. We follow this review with two experiments that vary the type of markup that annotators make on images, either points or strokes, under tightly controlled experimental conditions, in order to investigate the effect that this one source of variation has on the accuracy of these systems. In both experiments, we found that accuracy substantially increases when participants use a stroke-based interaction. In light of these results, the validity of claims about algorithm performance is brought into sharp focus, and we reflect on the need for far greater control of variables when benchmarking the impact of annotators and their context on these types of systems.
Keywords
semi-automatic, image, segmentation, human workload, accuracy, reproducibility, evaluation