ML-CGRA: An Integrated Compilation Framework to Enable Efficient Machine Learning Acceleration on CGRAs

2023 60th ACM/IEEE Design Automation Conference (DAC), 2023

Abstract
Coarse-Grained Reconfigurable Arrays (CGRAs) can achieve higher energy efficiency than general-purpose processors, fixed-function accelerators, or fine-grained reconfigurable devices, while remaining adaptable to different computational patterns. CGRAs have shown some success as a platform for accelerating machine learning (ML) thanks to their flexibility, which allows them to support new models not considered by fixed accelerators. However, current solutions for CGRAs rely on low-level, instruction-based compiler approaches and lack specialized compilation infrastructure from high-level ML frameworks that could leverage semantic information from the models, limiting the ability to map them efficiently onto the reconfigurable substrate. This paper proposes ML-CGRA, an integrated compilation framework based on the MLIR infrastructure that enables efficient ML acceleration on CGRAs. ML-CGRA provides an end-to-end solution for mapping ML models onto CGRAs that outperforms conventional approaches by 3.15× and 6.02× on 4×4 and 8×8 CGRAs, respectively. The framework is open-source and available at https://github.com/tancheng/mlir-cgra.