TaxoLLaMA: WordNet-based Model for Solving Multiple Lexical Semantic Tasks
arXiv (2024)
Abstract
In this paper, we explore the capabilities of LLMs in capturing
lexical-semantic knowledge from WordNet, using the LLaMA-2-7b model as an
example, and test it on multiple lexical semantic tasks. As the outcome of our
experiments, we present TaxoLLaMA, an all-in-one model that is lightweight
thanks to 4-bit quantization and LoRA. It achieves 11 SotA results and 4 top-2
results out of 16 tasks across Taxonomy Enrichment, Hypernym Discovery,
Taxonomy Construction, and Lexical Entailment. Moreover, it demonstrates very
strong zero-shot performance on Lexical Entailment and Taxonomy Construction
with no fine-tuning. We also explore its latent multilingual and domain
adaptation capabilities with a little tuning or few-shot learning. All
datasets, code, and the model are available online at
https://github.com/VityaVitalich/TaxoLLaMA