FusionCount: Efficient Crowd Counting via Multiscale Feature Fusion

arXiv (2022)

Cited 8 | Viewed 16
Abstract
State-of-the-art crowd counting models follow an encoder-decoder approach. Images are first processed by the encoder to extract features. Then, to account for perspective distortion, the highest-level feature map is fed to extra components to extract multiscale features, which are the input to the decoder to generate crowd densities. However, in these methods, features extracted at earlier stages during encoding are underutilised, and the multiscale modules can only capture a limited range of receptive fields, albeit with considerable computational cost. This paper proposes a novel crowd counting architecture (FusionCount), which exploits the adaptive fusion of a large majority of encoded features instead of relying on additional extraction components to obtain multiscale features. Thus, it can cover a more extensive scope of receptive field sizes and lower the computational cost. We also introduce a new channel reduction block, which can extract saliency information during decoding and further enhance the model’s performance. Experiments on two benchmark databases demonstrate that our model achieves state-of-the-art results with reduced computational complexity. PyTorch implementation of the model and weights trained on these two datasets are available at https://github.com/YimingMa/FusionCount.
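
Although the abstract stays high-level, its two main ideas, adaptively fusing encoder features from several stages and reducing channels before decoding, can be illustrated with a minimal PyTorch sketch. The sketch below is for exposition only: the module names (AdaptiveFusion, ChannelReduction), layer choices, and gating rule are assumptions, not the authors' actual FusionCount modules; the official implementation is at the GitHub link above.

    # Illustrative sketch only; not the authors' FusionCount code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ChannelReduction(nn.Module):
        """Hypothetical channel-reduction block: compress channels with a
        1x1 convolution so the decoder operates on a lighter feature map."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.reduce = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=1),
                nn.ReLU(inplace=True),
            )

        def forward(self, x):
            return self.reduce(x)

    class AdaptiveFusion(nn.Module):
        """Hypothetical adaptive fusion of two encoder feature maps: the
        deeper map is upsampled to the shallower map's resolution and the
        two are blended with a learned per-pixel weight."""
        def __init__(self, ch):
            super().__init__()
            self.gate = nn.Sequential(
                nn.Conv2d(2 * ch, 1, kernel_size=1),
                nn.Sigmoid(),
            )

        def forward(self, shallow, deep):
            deep = F.interpolate(deep, size=shallow.shape[-2:],
                                 mode="bilinear", align_corners=False)
            # Per-pixel blend weight in [0, 1] computed from both inputs.
            w = self.gate(torch.cat([shallow, deep], dim=1))
            return w * shallow + (1 - w) * deep

    if __name__ == "__main__":
        # Fuse a pair of toy encoder features, then reduce channels.
        shallow = torch.randn(1, 64, 64, 64)  # earlier-stage feature map
        deep = torch.randn(1, 64, 32, 32)     # later-stage feature map
        fused = AdaptiveFusion(64)(shallow, deep)
        out = ChannelReduction(64, 32)(fused)
        print(out.shape)  # torch.Size([1, 32, 64, 64])

The point of the sketch is that fusing already-computed encoder features needs only cheap 1x1 convolutions and upsampling, in contrast with dedicated multiscale extraction modules that add convolutional branches on top of the final encoder output.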
Keywords
crowd, FusionCount