Quantized Side Tuning: Fast and Memory-Efficient Tuning of Quantized Large Language Models
CoRR (2024)
Abstract
Finetuning large language models (LLMs) has been empirically effective on a
variety of downstream tasks. Existing approaches to finetuning an LLM either
focus on parameter-efficient finetuning, which only updates a small number of
trainable parameters, or attempt to reduce the memory footprint during the
training phase. Typically, the memory footprint during
finetuning stems from three contributors: model weights, optimizer states, and
intermediate activations. However, existing works still require considerable
memory and none can simultaneously mitigate memory footprint for all three
sources. In this paper, we present Quantized Side Tuing (QST), which enables
memory-efficient and fast finetuning of LLMs by operating through a dual-stage
process. First, QST quantizes an LLM's model weights into 4-bit to reduce the
memory footprint of the LLM's original weights; QST also introduces a side
network separated from the LLM, which utilizes the hidden states of the LLM to
make task-specific predictions. Using a separate side network avoids performing
backpropagation through the LLM, thus reducing the memory requirement of the
intermediate activations. Furthermore, QST leverages several low-rank adaptors
and gradient-free downsample modules to significantly reduce the number of
trainable parameters, thereby saving the memory footprint of the optimizer states.
Experiments show that QST can reduce the total memory footprint by up to 2.3×
and speed up the finetuning process by up to 3× while achieving competent
performance compared with the state of the art. Compared with full finetuning,
QST can reduce the total memory footprint by up to 7×.
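
As a rough illustration of the architecture described above, the PyTorch sketch below wires a frozen, quantized base model to a small trainable side network through low-rank adapters. All class names, dimensions, the gating scheme, and the assumption of a Hugging Face-style model that returns per-layer hidden_states are illustrative choices made for this example, not the authors' released implementation.

# Minimal sketch (assumptions): a frozen, quantized base model exposing
# per-layer hidden states, plus a small trainable side network.
import torch
import torch.nn as nn


class LowRankAdapter(nn.Module):
    # Low-rank projection from the LLM width to the (much smaller) side width.
    def __init__(self, d_llm, d_side, rank=16):
        super().__init__()
        self.down = nn.Linear(d_llm, rank, bias=False)
        self.up = nn.Linear(rank, d_side, bias=False)

    def forward(self, h):
        return self.up(self.down(h))


class SideBlock(nn.Module):
    # Fuses the frozen LLM's hidden state at one depth with the running side state.
    def __init__(self, d_llm, d_side):
        super().__init__()
        self.adapter = LowRankAdapter(d_llm, d_side)
        self.gate = nn.Parameter(torch.zeros(1))  # learned fusion gate
        self.mlp = nn.Sequential(
            nn.Linear(d_side, d_side), nn.GELU(), nn.Linear(d_side, d_side)
        )

    def forward(self, side_state, llm_hidden):
        g = torch.sigmoid(self.gate)
        fused = g * side_state + (1 - g) * self.adapter(llm_hidden)
        return fused + self.mlp(fused)


class SideTunedModel(nn.Module):
    # Frozen base LLM + trainable side network; gradients never traverse the LLM.
    def __init__(self, llm, d_llm, d_side, num_layers, num_labels):
        super().__init__()
        self.llm = llm.eval()  # quantized base model, kept frozen
        for p in self.llm.parameters():
            p.requires_grad_(False)
        self.blocks = nn.ModuleList(
            SideBlock(d_llm, d_side) for _ in range(num_layers)
        )
        self.head = nn.Linear(d_side, num_labels)

    def forward(self, input_ids, attention_mask=None):
        # Run the frozen LLM without tracking gradients, so its intermediate
        # activations are not retained for backpropagation.
        with torch.no_grad():
            out = self.llm(input_ids, attention_mask=attention_mask,
                           output_hidden_states=True)
        hiddens = out.hidden_states[1:]  # one hidden state per transformer layer
        side = input_ids.new_zeros(
            (input_ids.size(0), input_ids.size(1), self.head.in_features),
            dtype=torch.float32)
        for block, h in zip(self.blocks, hiddens):
            side = block(side, h.float())
        return self.head(side[:, -1])  # task prediction from the last token

In a setup like this, the base model could be loaded in 4-bit (for instance via Hugging Face transformers with bitsandbytes), and only the side blocks and the classification head would be passed to the optimizer. That matches the memory argument in the abstract: no optimizer states for the LLM, no backpropagation through it, and far fewer trainable parameters.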