A 17.1 TOPS/W FP-INT Transformer Inference Accelerator with Sparsity Boosting and Output Importance-Aware Processing

Jeonggyu So, Seongyon Hong, Jiwon Choi, Wooyoung Jo, Sangjin Kim, Hoi-Jun Yoo, Donghyeon Han

2025 IEEE International Symposium on Circuits and Systems (ISCAS), 2025

Keywords
Digital Processor, Transformer Inference, FP-INT, Energy Efficiency, Adder Tree, Booth Encoding, Block Floating Point