Learning From Disagreements: Discriminative Performance Evaluation

msra(2009)

Abstract
Selecting test cases in order to evaluate computer vision methods is important, yet it has not been addressed before. If the methods are evaluated on examples on which they perform very well or very poorly, then no reliable conclusions can be drawn regarding the superiority of one method versus the others. In this paper we put forth the idea that algorithms should be evaluated on the test cases on which they disagree most. We present a simple method that identifies the test cases that should be taken into account when comparing two algorithms and, at the same time, assesses the statistical significance of the differences in performance. We employ our methodology to compare two object detection algorithms and demonstrate its usefulness in enhancing the differences between the methods.
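A minimal sketch of this kind of disagreement-based comparison, assuming per-example correctness indicators are available for both methods (the paper's own selection and significance procedure may differ): restrict the comparison to the examples on which the two algorithms' outcomes differ, and apply an exact McNemar-style binomial test to those disagreement cases. The arrays correct_a and correct_b below are hypothetical, purely for illustration.

from math import comb

def compare_on_disagreements(correct_a, correct_b):
    """Return the indices of the disagreement cases and a two-sided exact
    p-value for the null hypothesis that both methods are equally likely
    to be the correct one on the examples where they disagree."""
    disagree = [i for i, (a, b) in enumerate(zip(correct_a, correct_b)) if a != b]
    n = len(disagree)
    if n == 0:
        return disagree, 1.0  # no disagreements: no evidence either way
    wins_a = sum(1 for i in disagree if correct_a[i])  # A right, B wrong
    # Exact two-sided binomial test with success probability 0.5 under the null.
    k = min(wins_a, n - wins_a)
    tail = sum(comb(n, j) for j in range(k + 1)) / 2 ** n
    p_value = min(1.0, 2 * tail)
    return disagree, p_value

# Toy usage: detector A wins most of the disagreement cases.
correct_a = [1, 1, 1, 1, 0, 1, 1, 1, 0, 1]
correct_b = [1, 0, 0, 1, 0, 0, 1, 0, 1, 0]
cases, p = compare_on_disagreements(correct_a, correct_b)
print(f"{len(cases)} disagreement cases, p = {p:.3f}")

Examples on which both methods succeed or both fail carry no information about their relative merit, which is why the test is conditioned on the disagreement set only.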