Linguistic Knowledge Can Enhance Encoder-Decoder Models (If You Let It)
CoRR (2024)
Abstract
In this paper, we explore the impact of augmenting pre-trained
Encoder-Decoder models, specifically T5, with linguistic knowledge for the
prediction of a target task. In particular, we investigate whether fine-tuning
a T5 model on an intermediate task that predicts structural linguistic
properties of sentences modifies its performance in the target task of
predicting sentence-level complexity. Our study encompasses diverse experiments
conducted on Italian and English datasets, employing both monolingual and
multilingual T5 models at various sizes. Results obtained for both languages
and in cross-lingual configurations show that linguistically motivated
intermediate fine-tuning generally has a positive impact on target task
performance, especially when applied to smaller models and in scenarios with
limited data availability.