VTR: An Optimized Vision Transformer for SAR ATR Acceleration on FPGA
Image Sensing Technologies: Materials, Devices, Systems, and Applications XI (2024)
Abstract
Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) is a key
technique used in military applications like remote-sensing image recognition.
Vision Transformers (ViTs) are the current state-of-the-art in various computer
vision applications, outperforming their CNN counterparts. However, using ViTs
for SAR ATR is challenging for two reasons: (1) standard ViTs have low locality
bias and require extensive training data to generalize well, whereas standard
SAR datasets provide only a limited number of labeled training samples, which
reduces the learning capability of ViTs; and (2) ViTs have high parameter
counts and are computation-intensive, making their deployment on
resource-constrained SAR platforms difficult. In this work, we develop a
lightweight ViT model that can be trained directly on small datasets without
any pre-training by utilizing the Shifted Patch Tokenization (SPT) and Locality
Self-Attention (LSA) modules. We directly train this model on SAR datasets
which have limited training samples to evaluate its effectiveness for SAR ATR
applications. We evaluate our proposed model, which we call VTR (ViT for SAR
ATR), on three widely used SAR datasets: MSTAR, SynthWakeSAR, and GBSAR.
Further, we propose a novel FPGA accelerator for VTR to enable its deployment
in real-time SAR ATR applications.
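The two modules named in the abstract can be illustrated briefly. Shifted Patch Tokenization (SPT) concatenates an image with four half-patch diagonal shifts of itself before patchifying, and Locality Self-Attention (LSA) replaces the fixed softmax scaling with a learnable temperature and masks each token's attention to itself. The following is a minimal NumPy sketch under those descriptions, not the authors' implementation; the function names are ours, and `np.roll` stands in for the zero-padded shifts of the original formulation:

```python
import numpy as np

def shifted_patch_tokenization(img, patch):
    """SPT sketch: stack the image with four half-patch diagonal shifts
    along the channel axis, then split into flattened patch tokens.
    img: (H, W, C) array; patch: patch side length."""
    s = patch // 2
    views = [img]
    for dy, dx in [(-s, -s), (-s, s), (s, -s), (s, s)]:
        # np.roll used here for brevity; the original uses zero padding
        views.append(np.roll(img, (dy, dx), axis=(0, 1)))
    stacked = np.concatenate(views, axis=-1)           # (H, W, 5C)
    H, W, C5 = stacked.shape
    tokens = stacked.reshape(H // patch, patch, W // patch, patch, C5)
    tokens = tokens.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * C5)
    return tokens                                      # (num_patches, patch*patch*5C)

def locality_self_attention(q, k, v, tau):
    """LSA sketch: dot-product attention with a learnable temperature tau
    and the diagonal (self-token) scores masked out before softmax."""
    scores = q @ k.T / tau                             # (N, N)
    np.fill_diagonal(scores, -np.inf)                  # diagonal masking
    scores -= scores.max(axis=-1, keepdims=True)       # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v
```

For an 8x8 single-channel image with patch size 4, SPT yields 4 tokens of length 4*4*5 = 80, i.e. five times the feature length of plain patch tokenization; the extra shifted views inject the spatial locality that standard ViTs lack.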