Implementation of a Binary Neural Network on a Passive Array of Magnetic Tunnel Junctions

Physical Review Applied (2022)

Abstract
The increasing scale of neural networks and their growing application space have produced demand for more energy- and memory-efficient artificial-intelligence-specific hardware. Avenues to mitigate the main issue, the von Neumann bottleneck, include in-memory and near-memory architectures, as well as algorithmic approaches. Here we leverage the low-power and the inherently binary operation of magnetic tunnel junctions (MTJs) to demonstrate neural network hardware inference based on passive arrays of MTJs. In general, transferring a trained network model to hardware for inference is confronted by degradation in performance due to device-to-device variations, write errors, parasitic resistance, and non-idealities in the substrate. To quantify the effect of these hardware realities, we benchmark 300 unique weight matrix solutions of a two-layer perceptron to classify the Wine dataset for both classification accuracy and write fidelity. Despite device imperfections, we achieve software-equivalent accuracy of up to 95.3% with proper tuning of network parameters in 15 × 15 MTJ arrays having a range of device sizes. The success of this tuning process shows that new metrics are needed to characterize the performance and quality of networks reproduced in mixed-signal hardware.
Keywords
Magnetic tunnel junctions, neural networks, vector-matrix multiplication, inference
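The abstract describes benchmarking a two-layer perceptron on the Wine dataset under hardware non-idealities such as device-to-device variation. A minimal sketch of that idea (not the paper's code): train a small two-layer perceptron in software, then multiply its weights by Gaussian noise as a stand-in for conductance variation and compare test accuracy. The 10% variation level and the 8-unit hidden layer are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: effect of simulated device-to-device weight variation
# on a two-layer perceptron trained on the Wine dataset.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
X = StandardScaler().fit_transform(X)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Two-layer perceptron (one hidden layer); size is an assumption.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(Xtr, ytr)
ideal = clf.score(Xte, yte)  # software ("ideal") accuracy

# Emulate device variation: multiplicative Gaussian noise on each weight.
rng = np.random.default_rng(0)
sigma = 0.10  # assumed 10% relative spread in device conductance
clf.coefs_ = [W * (1.0 + sigma * rng.standard_normal(W.shape))
              for W in clf.coefs_]
degraded = clf.score(Xte, yte)  # accuracy after simulated variation

print(f"ideal accuracy:    {ideal:.3f}")
print(f"with variation:    {degraded:.3f}")
```

Repeating the noisy evaluation over many random draws (analogous to the paper's 300 weight-matrix solutions) would give a distribution of degraded accuracies rather than a single point.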