An Energy Efficient Soft SIMD Microarchitecture and Its Application on Quantized CNNs

Pengbo Yu, Flavio Ponzina, Alexandre Levisse, Mohit Gupta, Dwaipayan Biswas, Giovanni Ansaloni, David Atienza, Francky Catthoor

IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS (2024)

Abstract
The ever-increasing computational complexity and energy consumption of today's applications, such as machine learning (ML) algorithms, not only strain the capabilities of the underlying hardware but also significantly restrict their wide deployment at the edge. Addressing these challenges requires novel architecture solutions that leverage opportunities exposed by the algorithms, e.g., robustness to small-bitwidth operand quantization and high intrinsic data-level parallelism. However, traditional hardware single instruction multiple data (Hard SIMD) architectures only support a small set of operand bitwidths, limiting performance improvement. To fill this gap, this manuscript introduces a novel pipelined processor microarchitecture for arithmetic computing based on the software-defined SIMD (Soft SIMD) paradigm, which can define arbitrary SIMD modes through control instructions at run-time. This microarchitecture is optimized for parallel fine-grained fixed-point arithmetic, such as shift/add. It can also efficiently execute sequential shift-add-based multiplication over SIMD subwords, thanks to zero-skipping and canonical signed digit (CSD) coding. A lightweight repacking unit allows changing the subword bitwidth dynamically. These features are implemented within a tight energy and area budget. An energy consumption model is established through post-synthesis analysis for performance assessment. We select heterogeneously quantized (HQ) convolutional neural networks (CNNs) from the ML domain as benchmarks and map them onto our microarchitecture. Experimental results show that our approach substantially outperforms a traditional Hard SIMD multiplier-adder in terms of area and energy requirements. In particular, our microarchitecture occupies up to 59.9% less area than a Hard SIMD design that supports fewer SIMD bitwidths, while consuming up to 50.1% less energy on average to execute HQ CNNs.
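To make the CSD-coded, zero-skipping shift-add multiplication mentioned in the abstract concrete, the sketch below is a minimal single-lane software model. It is not the authors' hardware or RTL: the function names to_csd and csd_shift_add_mul are illustrative only, and the real microarchitecture applies these steps in parallel across run-time-defined SIMD subwords rather than on a single Python integer.

```python
def to_csd(n):
    """Convert a non-negative integer to canonical signed digit (CSD) form:
    a list of digits in {-1, 0, +1}, least-significant digit first, with no
    two adjacent non-zero digits (minimal number of non-zero digits)."""
    digits = []
    while n != 0:
        if n & 1:
            d = 2 - (n & 3)   # n % 4 == 1 -> +1 ; n % 4 == 3 -> -1
            digits.append(d)
            n -= d            # remaining value is now even
        else:
            digits.append(0)
        n >>= 1
    return digits


def csd_shift_add_mul(x, w):
    """Multiply x by w as a sequence of shift-and-add/subtract steps driven
    by the CSD digits of |w|; zero digits are skipped entirely, modelling
    the zero-skipping behaviour of a sequential shift-add multiplier."""
    acc = 0
    for shift, d in enumerate(to_csd(abs(w))):
        if d == 0:
            continue              # zero digit: no add/sub cycle is spent
        acc += d * (x << shift)   # shift-add (d = +1) or shift-sub (d = -1)
    return -acc if w < 0 else acc


# Example: the weight 7 has three non-zero bits in binary (0b111) but only
# two non-zero CSD digits (8 - 1), so the multiplication takes two cycles.
assert to_csd(7) == [-1, 0, 0, 1]
assert csd_shift_add_mul(13, 7) == 91
```

Because each non-zero CSD digit costs one sequential shift-add cycle, encoding the (fixed, quantized) weights in CSD and skipping zero digits directly reduces the cycle count, which is the effect the abstract credits for the efficient subword multiplication.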
Keywords
Microarchitecture, Quantization (signal), Hardware, Arithmetic, Encoding, Software, Multiplexing, Canonical signed digit (CSD) coding, data-level parallelism, energy-efficient computing, heterogeneously quantized (HQ) convolutional neural networks (CNNs), software-defined single instruction multiple data (Soft SIMD)