Variance Reduction Can Improve Trade-Off in Multi-Objective Learning

ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2024)

Abstract
Many machine learning problems today involve multiple objective functions, which are often tackled within the multi-objective learning (MOL) framework. Although MOL algorithms have produced many encouraging results, a recent theoretical study [1] revealed that gradient-based MOL methods (e.g., MGDA, CAGrad) all exhibit an inherent trade-off between optimization convergence speed and conflict-avoidance ability. To this end, we develop an improved stochastic variance-reduced multi-objective gradient correction method for MOL, achieving an $\mathcal{O}(\varepsilon^{-1.5})$ sample complexity. In addition, the proposed method simultaneously improves the theoretical guarantees for conflict avoidance and convergence rate compared with prior stochastic gradient-based MOL methods in the non-convex setting. We further validate the effectiveness of the proposed method empirically on popular multi-task learning (MTL) benchmarks.
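To make the high-level recipe concrete, the sketch below combines two standard ingredients the abstract names: an MGDA-style min-norm combination of per-task gradients (for conflict avoidance) and a STORM-style recursive momentum estimator (for variance reduction). This is a minimal illustration on two toy quadratic objectives, not the paper's exact algorithm; all names and hyperparameters (grads, beta, lr, sigma) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, lr, beta, sigma, steps = 5, 0.1, 0.2, 0.5, 200  # illustrative hyperparameters

def grads(x, xi):
    """Stochastic gradients of two toy quadratics, f1(x)=||x-1||^2 and
    f2(x)=||x+1||^2, perturbed by a shared noise sample xi."""
    return 2.0 * (x - 1.0) + xi[0], 2.0 * (x + 1.0) + xi[1]

def min_norm_weight(d1, d2):
    """Closed-form MGDA min-norm weight for two objectives:
    argmin_{w in [0,1]} ||w*d1 + (1-w)*d2||^2."""
    diff = d1 - d2
    denom = diff @ diff
    if denom < 1e-12:
        return 0.5
    return float(np.clip((d2 - d1) @ d2 / denom, 0.0, 1.0))

x = rng.standard_normal(dim)
xi = sigma * rng.standard_normal((2, dim))
d1, d2 = grads(x, xi)                          # initialize estimators with one sample
for t in range(steps):
    w = min_norm_weight(d1, d2)                # conflict-avoiding combination (MGDA)
    x_prev, x = x, x - lr * (w * d1 + (1.0 - w) * d2)
    xi = sigma * rng.standard_normal((2, dim)) # fresh sample, reused at x and x_prev
    g_new, g_old = grads(x, xi), grads(x_prev, xi)
    d1 = g_new[0] + (1.0 - beta) * (d1 - g_old[0])  # STORM-style recursive correction
    d2 = g_new[1] + (1.0 - beta) * (d2 - g_old[1])

print("final x:", np.round(x, 3))  # approaches the Pareto set between -1 and 1
```

Note that the closed-form weight above exists only for two objectives; with more tasks, the min-norm weights are found by solving a simplex-constrained quadratic program (e.g., via Frank-Wolfe), as in standard MGDA implementations.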
Keywords
Multi-objective learning, Multi-task learning