Efficient Decoder-Free Object Detection with Transformers

European Conference on Computer Vision (2022)

Abstract
Vision transformers (ViTs) are changing the landscape of object detection approaches. A natural usage of ViTs in detection is to replace the CNN-based backbone with a transformer-based backbone, which is straightforward and effective, at the price of a considerable computational burden for inference. A more subtle usage is the DETR family, which eliminates the need for many hand-designed components in object detection but introduces a decoder that demands an extra-long training time to converge. As a result, transformer-based object detection cannot prevail in large-scale applications. To overcome these issues, we propose a novel decoder-free fully transformer-based (DFFT) object detector, achieving high efficiency in both the training and inference stages for the first time. We simplify object detection into an encoder-only, single-level, anchor-based dense prediction problem by centering around two entry points: 1) eliminate the training-inefficient decoder and leverage two strong encoders to preserve the accuracy of single-level feature map prediction; 2) explore low-level semantic features for the detection task with limited computational resources. In particular, we design a novel lightweight detection-oriented transformer backbone that efficiently captures low-level features with rich semantics, based on a well-conceived ablation study. Extensive experiments on the MS COCO benchmark demonstrate that DFFT\(_{\mathrm{SMALL}}\) outperforms DETR by \(2.5\%\) AP with a \(28\%\) computation cost reduction and more than \(10\times\) fewer training epochs. Compared with the cutting-edge anchor-based detector RetinaNet, DFFT\(_{\mathrm{SMALL}}\) obtains over \(5.5\%\) AP gain while cutting down \(70\%\) of the computation cost. The code is available at https://github.com/peixianchen/DFFT.
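The abstract frames detection as encoder-only, single-level, anchor-based dense prediction: every cell of one feature map carries a fixed set of anchors that the encoder head classifies and regresses directly, with no decoder or learned queries. The sketch below is illustrative only (not the authors' code); the RetinaNet-style anchor layout, the `make_anchors` name, and the specific scales and ratios are assumptions chosen to make the dense-prediction idea concrete.

```python
# Illustrative sketch: anchor layout for single-level dense prediction.
# Each cell of one H x W feature map (stride s on the input image) gets
# one anchor per (scale, ratio) pair; a decoder-free detector predicts a
# class score and box offsets for every anchor densely, instead of
# decoding a small set of object queries as in DETR.

def make_anchors(feat_h, feat_w, stride, scales=(32,), ratios=(0.5, 1.0, 2.0)):
    """Return one (cx, cy, w, h) anchor per (cell, scale, ratio)."""
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            # Anchor center sits at the middle of the cell, in image pixels.
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
            for s in scales:
                for r in ratios:
                    # Area is preserved at s*s; the ratio reshapes w vs. h.
                    w = s * (r ** 0.5)
                    h = s / (r ** 0.5)
                    anchors.append((cx, cy, w, h))
    return anchors

# A single 100x100 feature map at stride 8 with 3 ratios yields
# 100 * 100 * 3 = 30000 anchors to classify and regress per image.
anchors = make_anchors(100, 100, stride=8, scales=(32,))
print(len(anchors))  # 30000
```

The point of the single-level formulation is that this one dense grid replaces both the multi-scale feature pyramid and the DETR decoder, which is where the training- and inference-efficiency claims come from.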
Key words
Object detector, Transformers, Efficient network