LoRA Meets Dropout under a Unified Framework
arXiv (2024)
Abstract
With their remarkable capabilities, large language models (LLMs) have emerged
as essential components in numerous NLP applications, while parameter-efficient
finetuning, especially LoRA, has gained popularity as a lightweight approach
for model customization. Meanwhile, various dropout methods, initially designed
for full finetuning with all parameters updated, alleviate the overfitting
associated with excessive parameter redundancy. Hence, a possible contradiction
arises between the negligible trainable parameters of LoRA and the effectiveness
of previous dropout methods, which has been largely overlooked. To fill this gap,
we first confirm that parameter-efficient LoRA is also overfitting-prone. We
then revisit transformer-specific dropout methods and establish their
equivalence and distinctions mathematically and empirically. Building upon this
comparative analysis, we introduce a unified framework for a comprehensive
investigation, which instantiates these methods based on dropping position,
structural pattern and compensation measure. Through this framework, we reveal
their new preferences and comparative performance when only limited trainable
parameters are involved. This framework also allows us to amalgamate the
most favorable aspects into a novel dropout method named HiddenKey. Extensive
experiments verify the remarkable superiority and sufficiency of HiddenKey
across multiple models and tasks, which highlights it as the preferred approach
for high-performance and parameter-efficient finetuning of LLMs.
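To make the three axes of the framework concrete, below is a minimal PyTorch sketch of a dropout module parameterized by structural pattern (element-wise vs. column-wise masking) and compensation measure (train-time rescaling); the dropping position is then chosen simply by where the module is applied, e.g. to attention weights or to hidden representations. The class and argument names here are illustrative assumptions for exposition, not the paper's actual API.

import torch
import torch.nn as nn

class UnifiedDropout(nn.Module):
    """Dropout along two of the framework's axes: structural pattern
    and compensation. The third axis, dropping position, is decided by
    where this module is inserted in the transformer."""

    def __init__(self, p: float = 0.1, pattern: str = "element", rescale: bool = True):
        super().__init__()
        self.p = p                # drop probability
        self.pattern = pattern    # "element": independent entries; "column": whole feature columns
        self.rescale = rescale    # compensation: rescale kept activations at train time

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training or self.p == 0.0:
            return x
        if self.pattern == "element":
            # element-wise: every entry is dropped independently
            mask_shape = x.shape
        elif self.pattern == "column":
            # column-wise: the same feature columns are dropped across
            # all positions in the sequence dimension
            mask_shape = (*x.shape[:-2], 1, x.shape[-1])
        else:
            raise ValueError(f"unknown pattern: {self.pattern}")
        keep = (torch.rand(mask_shape, device=x.device) >= self.p).to(x.dtype)
        x = x * keep
        if self.rescale:
            # standard inverted-dropout compensation so expectations match at test time
            x = x / (1.0 - self.p)
        return x

# Usage sketch: the same module applied at two different dropping positions.
hidden = torch.randn(2, 16, 64)                          # (batch, seq_len, hidden_dim)
drop_hidden = UnifiedDropout(p=0.1, pattern="element").train()
drop_columns = UnifiedDropout(p=0.1, pattern="column").train()
out = drop_columns(drop_hidden(hidden))

Under this reading, varying the insertion point and the (pattern, rescale) settings would roughly instantiate the transformer-specific dropout variants the paper compares, though the exact configurations used by HiddenKey are specified in the paper itself rather than here.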