SuperLoRA: Parameter-Efficient Unified Adaptation of Multi-Layer Attention Modules
arXiv (2024)
Abstract
Low-rank adaptation (LoRA) and its variants are widely employed in
fine-tuning large models, including large language models for natural language
processing and diffusion models for computer vision. This paper proposes a
generalized framework called SuperLoRA that unifies and extends different LoRA
variants, each of which can be realized under a specific hyper-parameter setting.
Introducing grouping, folding, shuffling, projecting, and tensor factoring,
SuperLoRA offers high flexibility compared with other LoRA variants and
demonstrates superior performance on transfer learning tasks, especially in
extremely low-parameter regimes.
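
To make the low-rank idea concrete, below is a minimal PyTorch sketch of the standard LoRA update (the well-known formulation, where a frozen weight W is augmented by a rank-r product BA), followed by a hypothetical "grouped" variant illustrating the grouping idea from the abstract: one shared factorization covering several same-shaped layers at once. Class names, initialization choices, and the exact grouping layout are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Standard LoRA: freeze the pretrained weight W and learn a
    rank-r update Delta W = B @ A added to the layer's output."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # pretrained weights stay frozen
        self.A = nn.Parameter(0.01 * torch.randn(rank, base.in_features))
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero-init: no change at step 0
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)


class GroupedLoRA(nn.Module):
    """Hypothetical sketch of the 'grouping' idea: a single low-rank
    factorization shared across a group of same-shaped linear layers,
    sliced back out per layer. The paper's actual grouping/folding/
    projection scheme differs; this only illustrates joint adaptation."""

    def __init__(self, layers: list, rank: int = 4):
        super().__init__()
        self.layers = nn.ModuleList(layers)
        for p in self.layers.parameters():
            p.requires_grad_(False)
        in_f, out_f, g = layers[0].in_features, layers[0].out_features, len(layers)
        # One joint update of shape (g * out_f, in_f), factored at rank r.
        self.A = nn.Parameter(0.01 * torch.randn(rank, in_f))  # shared down-projection
        self.B = nn.Parameter(torch.zeros(g * out_f, rank))    # stacked up-projections

    def forward(self, i: int, x: torch.Tensor) -> torch.Tensor:
        out_f = self.layers[i].out_features
        delta = self.B[i * out_f:(i + 1) * out_f] @ self.A  # (out_f, in_f) slice for layer i
        return self.layers[i](x) + x @ delta.T


# Quick shape check on toy layers:
layers = [nn.Linear(64, 64) for _ in range(4)]
grouped = GroupedLoRA(layers, rank=2)
y = grouped(0, torch.randn(8, 64))  # -> shape (8, 64)
```

The parameter-count contrast shows why joint adaptation can reach lower budgets: with g layers of shape out x in, per-layer LoRA needs g * r * (in + out) trainable parameters, while the shared-A grouping above needs r * (in + g * out), saving (g - 1) * r * in.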