OmniDRL: An Energy-Efficient Deep Reinforcement Learning Processor With Dual-Mode Weight Compression and Sparse Weight Transposer

IEEE Journal of Solid-State Circuits (2022)

Abstract
In this article, we present an energy-efficient deep reinforcement learning (DRL) processor, OmniDRL, for DRL training on edge devices. The need for on-device DRL training is growing because DRL can adapt its policy to each individual user. However, the massive amount of external and internal memory access limits the implementation of DRL training on resource-constrained platforms. OmniDRL proposes four key features that reduce external memory access by compressing as much data as possible and reduce internal memory access by directly processing compressed data. A group-sparse training (GST) scheme achieves a high weight compression ratio (CR) at every DRL iteration through selective use of weight grouping and weight pruning. A group-sparse training core fully exploits the compressed weights from GST by skipping redundant operations and reusing duplicated data. An exponent-mean-delta encoding additionally compresses the exponents of both weights and feature maps for a higher CR and lower memory power consumption. The world's first on-chip sparse weight transposer enables DRL training on compressed weights without an off-chip transposer. OmniDRL is fabricated in a 28-nm CMOS technology and occupies a 3.6 × 3.6 mm² die area. It shows a state-of-the-art peak performance of 4.18 TFLOPS and a peak energy efficiency of 29.3 TFLOPS/W. It achieves 7.42-TFLOPS/W energy efficiency for training a robot agent (MuJoCo HalfCheetah, TD3), which is 2.4× higher than the previous state of the art.
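To make the exponent-mean-delta idea concrete, the following minimal sketch illustrates one plausible form of such an encoding for a block of FP32 values: store the block's mean exponent once and keep only a small signed delta per value, which needs fewer bits than the raw 8-bit exponent field. The function names, block granularity, and bit layout are assumptions for illustration, not the chip's actual format.

```python
import numpy as np

def exponent_mean_delta_encode(values: np.ndarray):
    """Hypothetical sketch of exponent-mean-delta encoding for FP32 data.

    Each value's IEEE-754 exponent field is replaced by its delta from the
    block's mean exponent; the (typically small) deltas can then be stored
    in fewer bits than the original 8-bit exponents.
    """
    bits = values.astype(np.float32).view(np.uint32)
    exponents = ((bits >> 23) & 0xFF).astype(np.int16)  # 8-bit biased exponents
    mean_exp = int(np.round(exponents.mean()))          # one shared mean per block
    deltas = exponents - mean_exp                        # small signed residuals
    sign_mantissa = bits & 0x807FFFFF                    # sign + mantissa kept as-is
    return mean_exp, deltas, sign_mantissa

def exponent_mean_delta_decode(mean_exp, deltas, sign_mantissa):
    """Rebuild the original FP32 values from the encoded representation."""
    exponents = (deltas + mean_exp).astype(np.uint32) & 0xFF
    bits = sign_mantissa | (exponents << 23)
    return bits.view(np.float32)

# Example: for similarly scaled weights the deltas cluster near zero,
# so they compress far better than raw 8-bit exponents.
w = np.random.randn(1024).astype(np.float32) * 0.01
mean_exp, deltas, rest = exponent_mean_delta_encode(w)
assert np.array_equal(w, exponent_mean_delta_decode(mean_exp, deltas, rest))
```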
Keywords
Data compression, deep reinforcement learning (DRL), energy-efficient deep neural network (DNN) application-specific integrated circuit (ASIC), structured weight, transposer, weight pruning