Simple and Effective Transfer Learning for Neuro-Symbolic Integration
CoRR (2024)
Abstract
Deep Learning (DL) techniques have achieved remarkable successes in recent
years. However, their ability to generalize and execute reasoning tasks remains
a challenge. A potential solution to this issue is Neuro-Symbolic Integration
(NeSy), where neural approaches are combined with symbolic reasoning. Most of
these methods exploit a neural network to map perceptions to symbols and a
logical reasoner to predict the output of the downstream task. These methods
exhibit superior generalization capacity compared to fully neural
architectures. However, they suffer from several issues, including slow
convergence, learning difficulties with complex perception tasks, and
convergence to local minima. This paper proposes a simple yet effective method
to ameliorate these problems. The key idea involves pretraining a neural model
on the downstream task. Then, a NeSy model is trained on the same task via
transfer learning, where the weights of the perceptual part are injected from
the pretrained network. The key observation of our work is that the neural
network fails to generalize only at the level of the symbolic part while being
perfectly capable of learning the mapping from perceptions to symbols. We have
tested our training strategy on various state-of-the-art (SOTA) NeSy methods and datasets,
demonstrating consistent improvements with respect to the aforementioned problems.
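The two-stage recipe the abstract describes (pretrain a fully neural model on the downstream task, then inject its perceptual weights into the NeSy model before NeSy training) can be sketched as follows. This is a minimal pure-Python illustration under stated assumptions: all class names, the dict-of-lists weight representation, and the addition-based reasoner are hypothetical stand-ins, not the paper's actual implementation.

```python
import copy

class PerceptionNet:
    """Toy stand-in for the neural perception module (e.g., a digit classifier)."""
    def __init__(self):
        # Hypothetical weights; in a real system these would be learned tensors.
        self.weights = {"conv": [0.0], "fc": [0.0]}

    def pretrain_on_downstream_task(self):
        # Stage 1: train a fully neural model end-to-end on the downstream
        # task (e.g., predicting the sum of two handwritten digits).
        # Here we just set dummy values to represent learned parameters.
        self.weights = {"conv": [0.7], "fc": [0.3]}

class NeSyModel:
    """Perception network composed with a symbolic reasoner."""
    def __init__(self, perception):
        self.perception = perception

    @classmethod
    def from_pretrained(cls, pretrained):
        # Stage 2 (the key step from the abstract): copy the pretrained
        # perceptual weights into the NeSy model's perception network
        # before NeSy training begins.
        perception = PerceptionNet()
        perception.weights = copy.deepcopy(pretrained.weights)
        return cls(perception)

    def reason(self, symbols):
        # Symbolic part: here, plain addition over predicted symbols.
        return sum(symbols)

neural = PerceptionNet()
neural.pretrain_on_downstream_task()
nesy = NeSyModel.from_pretrained(neural)
print(nesy.reason([3, 5]))  # symbolic reasoning over predicted symbols -> 8
```

The transfer step only moves the perception weights; the symbolic reasoner carries no learned parameters in this sketch, which mirrors the abstract's observation that the purely neural model already learns the perception-to-symbol mapping and fails only at the symbolic level.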