Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model
CoRR (2024)
Abstract
Recently the state space models (SSMs) with efficient hardware-aware designs,
i.e., Mamba, have shown great potential for long sequence modeling. Building
efficient and generic vision backbones purely upon SSMs is an appealing
direction. However, representing visual data is challenging for SSMs due to the
position-sensitivity of visual data and the requirement of global context for
visual understanding. In this paper, we show that the reliance of visual
representation learning on self-attention is not necessary and propose a new
generic vision backbone with bidirectional Mamba blocks (Vim), which marks the
image sequences with position embeddings and compresses the visual
representation with bidirectional state space models. On ImageNet
classification, COCO object detection, and ADE20k semantic segmentation tasks,
Vim achieves higher performance compared to well-established vision
transformers like DeiT, while also demonstrating significantly improved
computation & memory efficiency. For example, Vim is 2.8× faster than
DeiT and saves 86.8% GPU memory when performing batch inference to extract
features on images with a resolution of 1248×1248. The results
demonstrate that Vim is capable of overcoming the computation & memory
constraints on performing Transformer-style understanding for high-resolution
images and it has great potential to become the next-generation backbone for
vision foundation models. Code is available at https://github.com/hustvl/Vim.
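
To make the recipe in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of the idea: images are split into patch tokens, marked with learned position embeddings, and passed through blocks that scan the token sequence in both directions. The names (TinyVim, BidirectionalMixer) are illustrative, and a causal depthwise Conv1d is used as a stand-in for Mamba's selective SSM scan; this is not the authors' implementation, which is available at the repository linked above.

import torch
import torch.nn as nn

class BidirectionalMixer(nn.Module):
    """Mixes the token sequence in forward and backward directions and merges the results."""
    def __init__(self, dim, kernel_size=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        # Causal depthwise convolutions as a lightweight stand-in for the selective SSM scan.
        self.fwd = nn.Conv1d(dim, dim, kernel_size, padding=kernel_size - 1, groups=dim)
        self.bwd = nn.Conv1d(dim, dim, kernel_size, padding=kernel_size - 1, groups=dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                                    # x: (batch, tokens, dim)
        residual = x
        h = self.norm(x).transpose(1, 2)                     # (batch, dim, tokens)
        t = h.size(-1)
        fwd = self.fwd(h)[..., :t]                           # scan left-to-right
        bwd = self.bwd(h.flip(-1))[..., :t].flip(-1)         # scan right-to-left
        return residual + self.proj((fwd + bwd).transpose(1, 2))

class TinyVim(nn.Module):
    """Patch embedding + learned position embeddings + stacked bidirectional mixers."""
    def __init__(self, image_size=224, patch=16, dim=192, depth=4, num_classes=1000):
        super().__init__()
        tokens = (image_size // patch) ** 2
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos_embed = nn.Parameter(torch.zeros(1, tokens, dim))
        self.blocks = nn.Sequential(*[BidirectionalMixer(dim) for _ in range(depth)])
        self.head = nn.Linear(dim, num_classes)

    def forward(self, images):                               # images: (batch, 3, H, W)
        x = self.patch_embed(images).flatten(2).transpose(1, 2)  # (batch, tokens, dim)
        x = self.blocks(x + self.pos_embed)                  # position-marked token sequence
        return self.head(x.mean(dim=1))                      # mean-pool tokens, then classify

logits = TinyVim()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 1000])

Because the mixer is convolutional/recurrent rather than attention-based, its cost grows linearly with the number of tokens, which is the property the abstract credits for Vim's memory savings at high resolutions such as 1248×1248.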