Bayesian Uncertainty for Gradient Aggregation in Multi-Task Learning
CoRR (2024)
Abstract
As machine learning becomes more prominent, there is a growing demand to
perform several inference tasks in parallel. Running a dedicated model for each
task is computationally expensive, so there is great interest in multi-task
learning (MTL). MTL aims to learn a single model that solves several tasks
efficiently. Optimizing MTL models is often achieved by computing a single
gradient per task and aggregating the gradients to obtain a combined update
direction. However, these approaches overlook an important aspect: the
sensitivity of the individual gradient dimensions. Here, we introduce a novel
gradient aggregation approach using Bayesian inference. We place a probability
distribution over the task-specific parameters, which in turn induces a
distribution over each task's gradient. This additional information allows us
to quantify the uncertainty in each gradient dimension, which can then be
factored in when aggregating the gradients. We empirically demonstrate the
benefits of our approach on a variety of datasets, achieving state-of-the-art
performance.
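
The abstract does not spell out the aggregation rule, so the following is only a minimal sketch of the general idea: given Monte Carlo gradient samples per task (induced by a distribution over task-specific parameters), estimate per-dimension variance and combine gradients by inverse-variance weighting. The function and variable names (`aggregate_gradients`, `grad_samples`) are hypothetical, not the authors' API.

```python
import numpy as np

def aggregate_gradients(grad_samples, eps=1e-8):
    """Hypothetical uncertainty-weighted gradient aggregation.

    grad_samples: list with one array per task, each of shape
        (num_samples, dim) -- gradient samples induced by a
        distribution over that task's parameters.
    Returns a combined update direction of shape (dim,).
    """
    means, precisions = [], []
    for samples in grad_samples:
        means.append(samples.mean(axis=0))       # per-dimension mean gradient
        var = samples.var(axis=0)                # per-dimension uncertainty
        precisions.append(1.0 / (var + eps))     # inverse-variance weight
    means = np.stack(means)                      # (num_tasks, dim)
    precisions = np.stack(precisions)
    # Precision-weighted average: dimensions where a task's gradient
    # is uncertain contribute less to the combined direction.
    return (precisions * means).sum(axis=0) / precisions.sum(axis=0)

# Toy usage: two tasks, 32 gradient samples each, 5 parameters.
rng = np.random.default_rng(0)
g1 = rng.normal(loc=1.0, scale=0.1, size=(32, 5))   # low-noise gradient
g2 = rng.normal(loc=-1.0, scale=2.0, size=(32, 5))  # noisy gradient
print(aggregate_gradients([g1, g2]))  # combined direction leans toward task 1
```

In this sketch, a task whose gradient samples disagree strongly in some dimension is effectively down-weighted there, which is one plausible way to "factor in" per-dimension uncertainty as the abstract describes.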