ApproxABFT: Approximate Algorithm-Based Fault Tolerance for Vision Transformers

arXiv (2023)

Abstract
Vision Transformers (ViTs), with their outstanding performance, have become a popular backbone of deep learning models for mainstream vision tasks including classification, object detection, and segmentation. Beyond performance, reliability is also a critical metric for the adoption of ViTs in safety-critical applications such as autonomous driving and robotics. Observing that the major computing blocks in ViTs, such as multi-head attention and feed-forward networks, are usually implemented with general matrix multiplication (GEMM), we propose to adopt a classical algorithm-based fault tolerance (ABFT) strategy, originally developed for GEMM, to protect ViTs against soft errors in the underlying computing engines. Unlike classical ABFT, which invokes the expensive error-recovery procedure whenever a computing error is detected, we leverage the inherent fault tolerance of ViTs and propose an approximate ABFT, namely ApproxABFT, that invokes error recovery only when the computing errors are significant enough. This skips many unnecessary error-recovery invocations and simplifies the overall GEMM error recovery. According to our experiments, ApproxABFT reduces the computing overhead by 25.92% to 81.62% and improves the model accuracy by 2.63% to 72.56% compared to the baseline ABFT.
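The abstract describes checksum-based ABFT for GEMM combined with a significance threshold on detected errors. Below is a minimal NumPy sketch of that idea; the function name `abft_gemm`, the threshold parameter `tau`, and the single-error correction rule are illustrative assumptions for exposition, not the paper's exact algorithm.

```python
import numpy as np

def abft_gemm(A, B, tau=1e-6):
    """Checksum-protected GEMM C = A @ B (illustrative sketch).

    tau is a hypothetical significance threshold: tau near zero mimics
    classical ABFT (recover on any mismatch beyond rounding noise),
    while a larger tau mimics ApproxABFT (skip recovery for errors too
    small to matter for model accuracy).
    """
    # Encode: append a column-sum row to A and a row-sum column to B.
    A_ext = np.vstack([A, A.sum(axis=0)])
    B_ext = np.hstack([B, B.sum(axis=1, keepdims=True)])

    # One (fault-prone) GEMM yields both the result and its checksums.
    C_ext = A_ext @ B_ext
    C = C_ext[:-1, :-1]

    # Detect: recomputed checksums should match the carried ones.
    row_err = C.sum(axis=1) - C_ext[:-1, -1]   # per-row mismatch
    col_err = C.sum(axis=0) - C_ext[-1, :-1]   # per-column mismatch

    bad_rows = np.nonzero(np.abs(row_err) > tau)[0]
    bad_cols = np.nonzero(np.abs(col_err) > tau)[0]

    # Recover: a single faulty element lies at the intersection of the
    # mismatching row and column; subtract the checksum difference.
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        i, j = bad_rows[0], bad_cols[0]
        C[i, j] -= row_err[i]
    return C
```

In this framing, classical ABFT corresponds to the limit tau → 0 (up to floating-point rounding), whereas ApproxABFT raises the threshold so that the recovery step runs only when the checksum mismatch is large enough to affect the model's output quality, which is where the reported overhead savings come from.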