Unlocking Deep Learning: A BP-Free Approach for Parallel Block-Wise Training of Neural Networks
CoRR (2023)
Abstract
Backpropagation (BP) has been a successful optimization technique for deep
learning models. However, its limitations, such as backward- and
update-locking, and its biological implausibility, hinder the concurrent
updating of layers and do not mimic the local learning processes observed in
the human brain. To address these issues, recent research has suggested using
local error signals to asynchronously train network blocks. However, this
approach often involves extensive trial-and-error iterations to determine the
best configuration for local training. This includes decisions on how to
decouple network blocks and which auxiliary networks to use for each block. In
our work, we introduce a novel BP-free approach: a block-wise BP-free (BWBPF)
neural network that leverages local error signals to optimize distinct
sub-neural networks separately, where the global loss is only responsible for
updating the output layer. The local error signals used in the BP-free model
can be computed in parallel, enabling a potential speed-up in the weight update
process through parallel implementation. Our experimental results consistently
show that this approach can identify transferable decoupled architectures for
VGG and ResNet variations, outperforming models trained with end-to-end
backpropagation and other state-of-the-art block-wise learning techniques on
datasets such as CIFAR-10 and Tiny-ImageNet. The code is released at
https://github.com/Belis0811/BWBPF.
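The core idea of the abstract, that each block is optimized by its own local error signal while the global loss updates only the output layer, can be illustrated with a minimal PyTorch sketch. This is a hypothetical illustration, not the authors' released code: the block sizes, auxiliary heads, and loss choices here are assumptions. Detaching the tensor passed between blocks is one common way to realize the decoupling, since it stops gradients from flowing backward across block boundaries, so each block's update depends only on its own auxiliary loss.

```python
# Hypothetical sketch of block-wise local-loss training (not the paper's code).
# Each sub-network gets an auxiliary classifier head; the input to the next
# block is detached, so no gradient crosses block boundaries and each local
# loss can be computed independently (and hence in parallel).
import torch
import torch.nn as nn

torch.manual_seed(0)

block1 = nn.Sequential(nn.Linear(8, 16), nn.ReLU())   # first decoupled block
block2 = nn.Sequential(nn.Linear(16, 16), nn.ReLU())  # second decoupled block
aux1 = nn.Linear(16, 10)          # auxiliary head: block1's local error signal
output_layer = nn.Linear(16, 10)  # updated by the global loss only

criterion = nn.CrossEntropyLoss()
x = torch.randn(4, 8)             # toy batch
y = torch.randint(0, 10, (4,))    # toy labels

h1 = block1(x)
local_loss1 = criterion(aux1(h1), y)      # local loss for block1

h2 = block2(h1.detach())                  # detach: global loss cannot reach block1
global_loss = criterion(output_layer(h2), y)

# In a parallel implementation these backward passes would run concurrently;
# summing them here is equivalent because the computation graphs are disjoint.
(local_loss1 + global_loss).backward()

# block1's gradient comes only from its local auxiliary loss,
# block2's only from the global loss.
assert block1[0].weight.grad is not None
assert block2[0].weight.grad is not None
```

In the paper's setting the blocks would be stages of a VGG or ResNet rather than single linear layers, but the decoupling mechanism sketched above is the same.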
Keywords
Local loss, block-wise learning, computer vision