Language Models as Hierarchy Encoders
CoRR (2024)
Abstract
Interpreting hierarchical structures latent in language is a key limitation
of current language models (LMs). While previous research has implicitly
leveraged these hierarchies to enhance LMs, approaches for their explicit
encoding are yet to be explored. To address this, we introduce a novel approach
to re-train transformer encoder-based LMs as Hierarchy Transformer encoders
(HiTs), harnessing the expansive nature of hyperbolic space. Our method
situates the output embedding space of pre-trained LMs within a Poincaré ball
with a curvature that adapts to the embedding dimension, followed by
re-training on hyperbolic cluster and centripetal losses. These losses are
designed to effectively cluster related entities (input as texts) and organise
them hierarchically. We evaluate HiTs against pre-trained and fine-tuned LMs,
focusing on their capabilities in simulating transitive inference, predicting
subsumptions, and transferring knowledge across hierarchies. The results
demonstrate that HiTs consistently outperform both pre-trained and fine-tuned
LMs in these tasks, underscoring the effectiveness and transferability of our
re-trained hierarchy encoders.
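To make the geometry and objectives described above more concrete, the sketch below shows a minimal PyTorch implementation of a Poincaré-ball distance together with hinge-style cluster and centripetal losses over entity embeddings. The specific curvature choice c = 1/d, the margin values, and the function names are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import torch

def poincare_distance(x, y, c):
    """Geodesic distance on a Poincaré ball with curvature -c (c > 0)."""
    sq = lambda t: (t * t).sum(dim=-1)
    num = 2.0 * c * sq(x - y)
    den = (1.0 - c * sq(x)) * (1.0 - c * sq(y))
    return torch.acosh(1.0 + num / den.clamp_min(1e-12)) / c ** 0.5

def hierarchy_losses(child, parent, negative, c, margin_d=1.0, margin_n=0.1):
    """Hinge-style cluster and centripetal losses (illustrative, not the exact HiT objective)."""
    # Cluster loss: a child entity should be closer to its parent than to a negative entity.
    d_pos = poincare_distance(child, parent, c)
    d_neg = poincare_distance(child, negative, c)
    cluster = torch.relu(d_pos - d_neg + margin_d).mean()
    # Centripetal loss: parents should lie nearer the origin of the ball than their children.
    centripetal = torch.relu(parent.norm(dim=-1) - child.norm(dim=-1) + margin_n).mean()
    return cluster, centripetal

# Usage with (e.g. mean-pooled) sentence embeddings of dimension d
d = 768
c = 1.0 / d  # assumed curvature tied to the embedding dimension
child, parent, negative = (torch.randn(8, d) * 0.05 for _ in range(3))
cluster, centripetal = hierarchy_losses(child, parent, negative, c)
loss = cluster + centripetal
```

In this reading, the cluster term pulls textual entities toward their hierarchical neighbours while pushing unrelated ones apart, and the centripetal term arranges more general entities closer to the origin, so depth in the hierarchy corresponds to distance from the centre of the ball.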