Using the IBM Analog In-Memory Hardware Acceleration Kit for Neural Network Training and Inference
CoRR (2023)

Abstract
Analog In-Memory Computing (AIMC) is a promising approach to reduce the
latency and energy consumption of Deep Neural Network (DNN) inference and
training. However, the noisy and non-linear device characteristics, and the
non-ideal peripheral circuitry in AIMC chips, require adapting DNNs to be
deployed on such hardware to achieve equivalent accuracy to digital computing.
In this tutorial, we provide a deep dive into how such adaptations can be
achieved and evaluated using the recently released IBM Analog Hardware
Acceleration Kit (AIHWKit), freely available at https://github.com/IBM/aihwkit.
The AIHWKit is a Python library that simulates inference and training of DNNs
using AIMC. We present an in-depth description of the AIHWKit design,
functionality, and best practices to properly perform inference and training.
We also present an overview of the Analog AI Cloud Composer, a platform that
provides the benefits of using the AIHWKit simulation in a fully managed cloud
setting along with physical AIMC hardware access, freely available at
https://aihw-composer.draco.res.ibm.com. Finally, we show examples of how users
can expand and customize AIHWKit for their own needs. This tutorial is
accompanied by comprehensive Jupyter Notebook code examples that can be run
using AIHWKit, which can be downloaded from
https://github.com/IBM/aihwkit/tree/master/notebooks/tutorial.
Keywords
hardware, acceleration, in-memory