Constraint guided gradient descent: Training with inequality constraints with applications in regression and semantic segmentation

Neurocomputing (2023)

Abstract
Deep learning is typically performed by learning a neural network solely from data in the form of input–output pairs, ignoring available domain knowledge. In this work, the Constraint Guided Gradient Descent (CGGD) framework is proposed, which enables the injection of domain knowledge into the training procedure. The domain knowledge is assumed to be described as a conjunction of hard inequality constraints, which appears to be a natural choice for several applications. Compared to other neuro-symbolic approaches, the proposed method converges to a point that makes an arbitrarily small error with respect to any inequality constraint on the training data and does not require first transforming the constraints into an ad-hoc term that is added to the learning (optimization) objective. It is empirically shown on four small, independent data sets that CGGD makes training less dependent on the initialization of the network, improves constraint satisfaction on all data, improves the generalization of the model to unseen data, and relaxes the need for annotated data. Moreover, the method is tested on a regression task and a semantic segmentation task.
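The abstract only sketches the mechanism, so the following is a minimal, hypothetical PyTorch illustration of the general idea of steering gradient descent with a constraint-reducing direction rather than a penalty term added to the loss. The non-negativity constraint y_hat >= 0, the rescaling factor epsilon, the synthetic data, and the exact update rule are all assumptions made for illustration; this is a sketch of the concept, not the paper's precise CGGD algorithm.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy regression network. The "domain knowledge" is a single hard inequality
# constraint on the output, y_hat >= 0 (a hypothetical, illustrative choice).
model = torch.nn.Sequential(
    torch.nn.Linear(4, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1)
)
lr = 1e-2        # assumed learning rate
epsilon = 0.1    # assumed rescaling factor for the constraint direction

for step in range(200):
    x = torch.randn(32, 4)
    y = x.sum(dim=1, keepdim=True).abs()  # synthetic non-negative targets

    y_hat = model(x)
    data_loss = F.mse_loss(y_hat, y)

    # Gradient of the ordinary data loss.
    data_grads = torch.autograd.grad(
        data_loss, model.parameters(), retain_graph=True
    )

    # Violation of y_hat >= 0, i.e. relu(-y_hat); positive when violated.
    violation = F.relu(-y_hat).sum()
    if violation > 0:
        cons_grads = torch.autograd.grad(violation, model.parameters())
    else:
        cons_grads = [torch.zeros_like(g) for g in data_grads]

    # Update: follow the data gradient plus a rescaled direction that reduces
    # the constraint violation, instead of folding the constraint into the
    # loss as a penalty term.
    with torch.no_grad():
        for p, g_d, g_c in zip(model.parameters(), data_grads, cons_grads):
            direction = g_d.clone()
            if g_c.norm() > 0:
                direction += epsilon * g_d.norm() * g_c / g_c.norm()
            p -= lr * direction
```

Rescaling the constraint direction by the data-gradient norm keeps the constraint term active at a comparable magnitude whenever a violation remains, which is one way to avoid the manual loss-weight tuning that penalty-based approaches require.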
Keywords
Neuro-symbolic artificial intelligence, Neural networks, Constrained optimization, Flexible supervision, Learning and reasoning