An FPGA Accelerator with Efficient Weight Compression by Combining Bit-Level Sparsity and Mixed-Precision Quantization
IEEE Transactions on Circuits and Systems II: Express Briefs (2025)
Keywords
Bit-level sparsity, mixed-precision, bit-serial computation, deep neural network, FPGA