A 28nm 0.22μJ/Token Memory-Compute-Intensity-Aware CNN-Transformer Accelerator with Hybrid-Attention-Based Layer-Fusion and Cascaded Pruning for Semantic-Segmentation

IEEE International Solid-State Circuits Conference (ISSCC), 2025

Keywords
Energy Consumption, Decoding, Sparsity, Receptive Field, Transformer Model, Computational Overhead, CNN Model, Open Reduction, Semantic Segmentation Task, Hardware Accelerators, Language Processing Tasks, External Access, Left Matrix, Convolutional Weights, Backbone Segments