Controllable Prompt Tuning For Balancing Group Distributional Robustness
arXiv (2024)
Abstract
Models trained on data composed of different groups or domains can suffer
from severe performance degradation under distribution shifts. While recent
methods have largely focused on optimizing the worst-group objective, this
often comes at the expense of good performance on other groups. To address this
problem, we introduce an optimization scheme that seeks a solution performing
well across all groups, without severely sacrificing performance on any of
them. However, directly applying such optimization
involves updating the parameters of the entire network, making it both
computationally expensive and challenging. Thus, we introduce Controllable
Prompt Tuning (CPT), which couples our approach with prompt-tuning techniques.
On spurious correlation benchmarks, our procedures achieve state-of-the-art
results across both transformer and non-transformer architectures, as well as
unimodal and multimodal data, while requiring only 0.4% tunable model
parameters.
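The abstract describes two ingredients: an objective that balances average and worst-group performance, and parameter-efficient tuning where only a small prompt is updated while the backbone stays frozen. The toy sketch below illustrates that combination on synthetic data; it is not the authors' CPT algorithm — the backbone, the input-shift prompt, and the single mixing knob `alpha` are all simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two groups whose labels depend on different features
# (a hypothetical stand-in for the groups in spurious-correlation benchmarks).
n, d, h = 200, 5, 16
Xa = rng.normal(size=(n, d)); ya = (Xa[:, 0] > 0).astype(float)
Xb = rng.normal(size=(n, d)); yb = (Xb[:, 1] > 0).astype(float)

# "Frozen backbone": a fixed random one-hidden-layer network.
# Only the d-dimensional input-space prompt below is ever updated.
A = rng.normal(size=(h, d))
w = rng.normal(size=h)
prompt = np.zeros(d)  # the small trainable parameter set

def loss_and_grad(X, y, p):
    """Logistic loss of the frozen net on prompt-shifted inputs, plus
    its gradient w.r.t. the prompt only (A and w stay fixed)."""
    H = np.tanh((X + p) @ A.T)                    # (n, h) frozen features
    prob = 1.0 / (1.0 + np.exp(-(H @ w)))         # (n,) predicted P(y=1)
    eps = 1e-9
    loss = -np.mean(y * np.log(prob + eps) + (1 - y) * np.log(1 - prob + eps))
    dZ = ((prob - y)[:, None] * w[None, :]) * (1.0 - H ** 2)  # (n, h)
    return loss, (dZ @ A).mean(axis=0)            # gradient, shape (d,)

def balanced_step(groups, p, alpha=0.5, lr=0.05):
    """One descent step on (1-alpha)*average loss + alpha*worst-group loss.
    alpha is the control knob: 0 = plain averaging, 1 = worst-group only."""
    pairs = [loss_and_grad(X, y, p) for X, y in groups]
    losses = [l for l, _ in pairs]
    avg_g = np.mean([g for _, g in pairs], axis=0)
    worst_g = pairs[int(np.argmax(losses))][1]
    return p - lr * ((1 - alpha) * avg_g + alpha * worst_g), max(losses)

groups = [(Xa, ya), (Xb, yb)]
_, worst0 = balanced_step(groups, prompt, lr=0.0)  # worst-group loss at init
for _ in range(300):
    prompt, worst = balanced_step(groups, prompt, alpha=0.5, lr=0.05)
```

Setting `alpha` between 0 and 1 trades off the average objective against the worst-group objective, which is the kind of controllable balancing the abstract refers to; the cost advantage comes from the prompt (here `d = 5` values) being far smaller than the frozen backbone.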