Shifted Autoencoders for Point Annotation Restoration in Object Counting
arXiv (2023)
Abstract
Object counting typically uses 2D point annotations. The complexity of object
shapes and the subjectivity of annotators may lead to annotation inconsistency,
potentially confusing counting model training. Some sophisticated
noise-resistant counting methods have been proposed to alleviate this issue.
Differently, we aim to directly refine the initial point annotations before
training counting models. For that, we propose the Shifted Autoencoders (SAE),
which enhances annotation consistency. Specifically, SAE applies random shifts
to initial point annotations and employs a UNet to restore them to their
original positions. Similar to MAE reconstruction, the trained SAE captures
general position knowledge and ignores specific manual offset noise. This
allows it to restore the initial point annotations to more general and thus
more consistent positions. Extensive experiments show that using such refined,
consistent annotations to train several advanced (including noise-resistant)
object counting models consistently and significantly boosts their performance.
Remarkably, the proposed SAE helps set new records on nine datasets. We will
make the code and refined point annotations available.
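The core data step described above, applying random shifts to the initial point annotations so a restoration network can learn to undo them, can be sketched as follows. This is a minimal illustration under assumptions: the function name `make_sae_pairs`, the uniform shift distribution, and the `max_shift` radius are all hypothetical, since the abstract does not specify the shift scheme or the UNet details.

```python
import numpy as np

def make_sae_pairs(points, max_shift=8.0, rng=None):
    """Create (shifted, original) annotation pairs for SAE-style training.

    points: (N, 2) array of 2D point annotations.
    max_shift: hypothetical maximum per-axis shift in pixels (an assumption;
        the abstract does not give the paper's exact shift distribution).
    Returns (shifted_points, points). A restoration network (a UNet in the
    paper) would be trained to map shifted_points back to points, learning
    general position knowledge while ignoring annotator-specific offsets.
    """
    rng = np.random.default_rng(rng)
    shifts = rng.uniform(-max_shift, max_shift, size=points.shape)
    return points + shifts, points

# Usage: perturb two annotations by at most 4 pixels per axis.
pts = np.array([[10.0, 20.0], [30.0, 40.0]])
noisy, target = make_sae_pairs(pts, max_shift=4.0, rng=0)
```

At inference time, the trained restorer would be applied to the original annotations themselves, moving each point to the more "general" position the network has learned, which is what yields the refined, consistent labels used to train the counting models.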