Layer-Wise Representation Fusion for Compositional Generalization

Yafang Zheng, Lei Lin, Shuangtao Li, Yuxuan Yuan, Zhaohong Lai, Shan Liu, Biao Fu, Yidong Chen, Xiaodong Shi

AAAI 2024 (2024)

Abstract
Existing neural models have been shown to struggle with compositional generalization (CG), i.e., the ability to systematically generalize to unseen compositions of seen components. A key reason for this failure is that the syntactic and semantic representations of sequences in the uppermost layers of both the encoder and decoder are entangled. However, previous work has concentrated on separating the learning of syntax and semantics rather than investigating the causes of this representation entanglement (RE) problem in order to solve it. We explain why it arises by analyzing how representations evolve from the bottom to the top of the Transformer layers. We find that the "shallow" residual connections within each layer fail to fuse previous layers' information effectively, leading to information forgetting between layers and, in turn, to the RE problem. Motivated by this, we propose LRF, a novel Layer-wise Representation Fusion framework for CG, which learns to fuse previous layers' information back into the encoding and decoding process by introducing a fuse-attention module at each encoder and decoder layer. LRF achieves promising results on two realistic benchmarks, empirically demonstrating the effectiveness of our proposal. Code is available at https://github.com/thinkaboutzero/LRF.
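To make the mechanism concrete, below is a minimal PyTorch sketch of what such a fuse-attention module might look like. The class name FuseAttention, the use of standard multi-head attention, and the per-token attention over the stack of previous layers' outputs are illustrative assumptions based only on the abstract, not the authors' exact implementation (see the linked repository for that).

```python
# Minimal sketch (assumptions, not the official LRF code).
from typing import List

import torch
import torch.nn as nn


class FuseAttention(nn.Module):
    """Fuse the outputs of all previous layers back into the current layer.

    Each token's current representation acts as the query; its representations
    at every earlier layer act as keys/values, so information from shallow
    layers is not forgotten as depth grows.
    """

    def __init__(self, d_model: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, current: torch.Tensor,
                history: List[torch.Tensor]) -> torch.Tensor:
        # current: (batch, seq_len, d_model) - output of the current layer
        # history: outputs of layers 0..l-1, each (batch, seq_len, d_model)
        b, t, d = current.shape
        # Stack the layer history so each token attends over its own
        # representation at every earlier layer.
        mem = torch.stack(history, dim=2).reshape(b * t, len(history), d)
        query = current.reshape(b * t, 1, d)
        fused, _ = self.attn(query, mem, mem)   # attend over layer history
        fused = fused.reshape(b, t, d)
        return self.norm(current + fused)       # residual + layer norm
```

Under these assumptions, encoder layer l would call fuse(h_l, [h_0, ..., h_{l-1}]), refining each layer's output against the full layer history instead of relying on the plain residual stream alone.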
Keywords
NLP: Other,HAI: Other Foundations of Human Computation & AI,ML: Applications,ML: Deep Generative Models & Autoencoders,ML: Deep Neural Architectures and Foundation Models,ML: Representation Learning,NLP: Generation,NLP: Lexical Semantics and Morphology,NLP: Machine Translation, Multilinguality, Cross-Lingual NLP