Effective and Efficient Batch Normalization Using a Few Uncorrelated Data for Statistics Estimation

IEEE Transactions on Neural Networks and Learning Systems (2021)

Citations: 17 | Views: 229
Deep neural networks (DNNs) have thrived in recent years, with batch normalization (BN) playing an indispensable role. However, BN is costly because of its heavy reduction and elementwise operations, which are hard to execute in parallel and therefore substantially slow down training. To address this issue, we propose a methodology that alleviates BN's cost by using only a few sampled or generated data points for mean and variance estimation at each iteration. The key challenge is achieving a satisfactory balance between normalization effectiveness and execution efficiency: effectiveness calls for less data correlation in sampling, while efficiency calls for more regular execution patterns. To this end, we design two categories of approaches that sample or create a few uncorrelated data points for statistics estimation under certain strategy constraints. The former category includes “batch sampling (BS),” which randomly selects a few samples from each batch, and “feature sampling (FS),” which randomly selects a small patch from each feature map of all samples; the latter is “virtual data set normalization (VDN),” which generates a few synthetic random samples to directly create uncorrelated data for statistics estimation. Accordingly, multiway strategies are designed to reduce data correlation for accurate estimation while optimizing the execution pattern for acceleration. The proposed methods are comprehensively evaluated on various DNN models, with negligible loss of model accuracy and convergence rate. Without support from any specialized libraries, 1.98× BN-layer acceleration and a 23.2% overall training speedup are achieved in practice on modern GPUs. Furthermore, our methods perform strongly on the well-known “micro-BN” problem, where the batch size is tiny.
This article provides a promising solution for the efficient training of high-performance DNNs.
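To make the batch-sampling idea concrete, the following is a minimal NumPy sketch (not the authors' implementation): it estimates BN's mean and variance from a random subset of the batch and normalizes every sample with those statistics. The function name and the `sample_ratio` parameter are hypothetical choices for illustration.

```python
import numpy as np

def sampled_batch_norm(x, sample_ratio=0.25, eps=1e-5, rng=None):
    """Sketch of batch sampling (BS): estimate per-feature mean/variance
    from a few randomly chosen samples, then normalize the whole batch.
    `sample_ratio` is an illustrative knob, not a parameter from the paper."""
    rng = np.random.default_rng() if rng is None else rng
    n = x.shape[0]
    k = max(1, int(n * sample_ratio))
    idx = rng.choice(n, size=k, replace=False)  # a few uncorrelated samples
    mean = x[idx].mean(axis=0)                  # reduced-cost statistics
    var = x[idx].var(axis=0)
    return (x - mean) / np.sqrt(var + eps)
```

Because only `k` of the `n` samples enter the reduction, the expensive mean/variance pass shrinks proportionally, while random selection keeps the sampled data weakly correlated so the estimates stay close to the full-batch statistics.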
Keywords: Batch normalization (BN), data sampling and generation, deep network acceleration, uncorrelated data