Scaling Novel Object Detection with Weakly Supervised Detection Transformers

2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023

Abstract
A critical object detection task is finetuning an existing model to detect novel objects, but the standard workflow requires bounding box annotations which are time-consuming and expensive to collect. Weakly supervised object detection (WSOD) offers an appealing alternative, where object detectors can be trained using image-level labels. However, the practical application of current WSOD models is limited, as they only operate at small data scales and require multiple rounds of training and refinement. To address this, we propose the Weakly Supervised Detection Transformer, which enables efficient knowledge transfer from a large-scale pretraining dataset to WSOD finetuning on hundreds of novel objects. Additionally, we leverage pretrained knowledge to improve the multiple instance learning (MIL) framework often used in WSOD methods. Our experiments show that our approach outperforms previous state-of-the-art models on large-scale novel object detection datasets, and our scaling study reveals that class quantity is more important than image quantity for WSOD pretraining.
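The multiple instance learning (MIL) framework mentioned in the abstract is commonly instantiated in WSOD as a two-branch scoring head (as in WSDDN): a classification branch scores classes per proposal, a detection branch scores proposals per class, and their product is summed over proposals to give image-level class scores trainable from image-level labels alone. A minimal numpy sketch of that standard formulation (function and variable names are illustrative; the paper's actual MIL head may differ):

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def mil_image_scores(cls_logits, det_logits):
    """WSDDN-style MIL aggregation.

    cls_logits, det_logits: arrays of shape (num_proposals, num_classes).
    Returns image-level class scores of shape (num_classes,), each in [0, 1].
    """
    cls = softmax(cls_logits, axis=1)  # "which class" for each proposal
    det = softmax(det_logits, axis=0)  # "which proposal" for each class
    proposal_scores = cls * det        # per-proposal, per-class detection scores
    return proposal_scores.sum(axis=0)  # aggregate to image-level scores

# Example: 5 proposals, 3 classes, random logits
rng = np.random.default_rng(0)
scores = mil_image_scores(rng.normal(size=(5, 3)), rng.normal(size=(5, 3)))
```

The image-level scores can then be trained with a binary cross-entropy loss against the image-level labels, which is what makes the setup "weakly supervised": no bounding boxes are needed.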
Keywords
Algorithms: Image recognition and understanding (object detection, categorization, segmentation); Machine learning architectures, formulations, and algorithms (including transfer, low-shot, semi-, self-, and un-supervised learning)