Binarizing Weights Wisely for Edge Intelligence: Guide for Partial Binarization of Deconvolution-Based Generators

IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2020)

Abstract
This article explores weight binarization of the deconvolution-based generator in a generative adversarial network (GAN) for memory saving and speedup of image construction on the edge. It suggests that, unlike convolutional neural networks (including the discriminator), where all layers can be binarized, only some of the layers in the generator can be binarized without significant performance loss. Supported by theoretical analysis and verified by experiments, a direct metric based on the dimension of the deconvolution operations is established, which can be used to quickly decide which layers in a generator can be binarized. Our results also indicate that both the generator and the discriminator should be binarized simultaneously for balanced competition and better performance during training. Experimental results on the CelebA dataset with DCGAN and the original loss functions suggest that directly applying state-of-the-art binarization techniques to all layers of the generator leads to a $2.83\times$ performance loss measured by sliced Wasserstein distance compared with the original generator, while applying them to selected layers only yields up to $25.81\times$ saving in memory consumption, and $1.96\times$ and $1.32\times$ speedups in inference and training, respectively, with little performance loss. Similar conclusions can also be drawn for other loss functions and different GANs.
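The abstract does not spell out the binarization scheme beyond citing state-of-the-art techniques, so the following is an illustration only: a minimal PyTorch-style sketch of partial weight binarization for a DCGAN-like generator, using sign weights with a per-tensor scaling factor (in the spirit of XNOR-Net/BinaryConnect) and a straight-through estimator for training. The class name BinarizedDeconv2d, the scaling scheme, and the hand-picked layer mask are all assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizedDeconv2d(nn.ConvTranspose2d):
    """Hypothetical sketch: transposed convolution whose weights are
    binarized in the forward pass. Real-valued weights are kept for the
    optimizer; the forward pass uses sign(w) scaled by the mean absolute
    weight, with a straight-through estimator for gradients."""

    def forward(self, x):
        w = self.weight
        alpha = w.abs().mean()            # per-tensor scaling factor
        w_bin = alpha * torch.sign(w)     # binarized weights
        w_ste = w + (w_bin - w).detach()  # straight-through estimator
        return F.conv_transpose2d(
            x, w_ste, self.bias, self.stride, self.padding,
            self.output_padding, self.groups, self.dilation)

def make_generator(z_dim=100, binarize_mask=(False, True, True, True, False)):
    """DCGAN-like generator; binarize_mask selects which deconvolution
    layers are binarized. The mask here is a placeholder, not the
    metric-derived choice from the paper."""
    chans = [z_dim, 512, 256, 128, 64, 3]
    layers = []
    for i, binarize in enumerate(binarize_mask):
        deconv_cls = BinarizedDeconv2d if binarize else nn.ConvTranspose2d
        first, last = i == 0, i == len(binarize_mask) - 1
        layers.append(deconv_cls(
            chans[i], chans[i + 1], kernel_size=4,
            stride=1 if first else 2, padding=0 if first else 1, bias=False))
        if last:
            layers.append(nn.Tanh())
        else:
            layers += [nn.BatchNorm2d(chans[i + 1]), nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)
```

As a usage check, `make_generator()(torch.randn(8, 100, 1, 1))` produces 8 RGB images of size 64x64; in the paper's setting, the dimension-based metric would determine which entries of binarize_mask are set.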
Keywords
Binarization, compact model, compression, deconvolution, generative adversarial network (GAN)