Implicit Regularization of Gradient Flow on One-Layer Softmax Attention
arXiv (2024)
Abstract
We study gradient flow on the exponential loss for a classification problem
with a one-layer softmax attention model, where the key and query weight
matrices are trained separately. Under a separability assumption on the data,
we show that when gradient flow achieves the minimal loss value, it further
implicitly minimizes the nuclear norm of the product of the key and query
weight matrices. Such implicit regularization can be described by a Support
Vector Machine (SVM) problem with respect to the attention weights. This
finding contrasts with prior results showing that gradient descent induces
an implicit regularization on the Frobenius norm of the product weight matrix
when the key and query matrices are combined into a single weight matrix for
training. For diagonal key and query matrices, our analysis builds upon the
reparameterization technique and exploits approximate KKT conditions of the SVM
associated with the classification data. Moreover, the results are extended to
general weight configurations, given proper alignment of the weight matrices'
singular spaces with the data features at initialization.
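To make the setting concrete, here is a minimal sketch in assumed notation (the symbols below are illustrative, not taken from the paper): for an input sequence $X \in \mathbb{R}^{T \times d}$ with label $y \in \{-1,+1\}$, a one-layer softmax attention predictor with separately trained key and query matrices $K, Q \in \mathbb{R}^{d \times m}$ and a fixed linear head $v$ can be written as

$$ f(X; K, Q) = v^\top X^\top \operatorname{softmax}\!\big( X K Q^\top x_{\mathrm{query}} \big), $$

and gradient flow on the exponential loss over training pairs $\{(X_i, y_i)\}_{i=1}^n$ is

$$ \dot{\theta}(t) = -\nabla_\theta L(\theta(t)), \qquad L(K, Q) = \sum_{i=1}^n \exp\!\big(-y_i f(X_i; K, Q)\big), \qquad \theta = (K, Q). $$

Under this reading, the abstract's claim is that once the loss is driven to its infimum, the product $K Q^\top$ is implicitly biased toward the attention SVM solution of minimal nuclear norm, whereas training a single combined matrix $W = K Q^\top$ is known from the prior work cited above to bias toward the minimal Frobenius norm solution.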