An Energy Efficient In-Memory Computing Machine Learning Classifier Scheme

Shixiong Jiang, Sheena Ratnam Priya, Naveena Elango, James Clay, Ramalingam Sridhar

2019 32nd International Conference on VLSI Design and 2019 18th International Conference on Embedded Systems (VLSID) (2019)

Abstract
Large-scale machine learning (ML) algorithms require extensive memory interactions. Reducing or avoiding data movement can significantly increase the speed and efficiency of many ML tasks. Towards this end, we devise an energy efficient in-memory computing kernel for an ML linear classifier and design a prototype. Compared with another in-memory computing kernel for ML applications [1], our design achieves over 6.4 times the power savings of a conventional discrete system while improving reliability by 54.67%. We employ a split-data-aware technique to manage process, voltage, and temperature variations, and a trimodal architecture with a hierarchical tree structure to further decrease power consumption. Our scheme provides a fast, energy efficient, and competitively accurate binary classification kernel.
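The core operation such an in-memory kernel accelerates is the linear decision rule y = sign(w·x + b), evaluated inside the memory array instead of shuttling operands to a processor. A minimal functional sketch of that binary classification rule is below; the weights, bias, and inputs are illustrative placeholders, not values from the paper.

```python
def linear_classify(x, w, b):
    """Binary linear classifier: return +1 or -1 by the sign of w.x + b.

    This models only the arithmetic the in-memory kernel performs
    (multiply-accumulate plus threshold), not the circuit itself.
    """
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

# Illustrative 4-feature example (hypothetical values)
w = [0.5, -1.0, 0.25, 0.75]  # weight vector
b = -0.1                     # bias term
x = [1.0, 0.2, 0.8, 0.4]     # input feature vector
print(linear_classify(x, w, b))  # → 1
```

In the hardware scheme, the multiply-accumulate over `w` and `x` is the part computed within the memory array, so only the final class decision needs to leave the kernel.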
Keywords
Machine Learning, Classifier, In-memory computing, Low power, Hybrid, Trimodal