Learning and inference with constraints

AAAI (2008)

Cited by 91 | Views 119
Abstract
Probabilistic modeling has been a dominant approach in Machine Learning research. As the field evolves, the problems of interest become increasingly challenging and complex. Making complex decisions in real-world problems often involves assigning values to sets of interdependent variables where the expressive dependency structure can influence, or even dictate, what assignments are possible. However, incorporating nonlocal dependencies in a probabilistic model can lead to intractable training and inference. This paper presents Constrained Conditional Models (CCMs), a framework that augments probabilistic models with declarative constraints as a way to support decisions in an expressive output space while maintaining modularity and tractability of training. We further show that declarative constraints can be used to take advantage of unlabeled data when training the probabilistic model.
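To make the CCM idea concrete, below is a minimal sketch (in Python; not the paper's code) of inference under a CCM-style objective, argmax_y Σ_i score(x_i, y_i) − Σ_k ρ_k · d(y, C_k): a modular per-token scorer is defined independently, and a declarative constraint is enforced only at decoding time as a soft penalty. The labels, toy scores, the "an I label may not start a sequence or follow O" constraint, and the penalty weight rho are all illustrative assumptions, not the paper's experimental setup.

```python
import itertools

LABELS = ["B", "I", "O"]

def local_score(token, label):
    # Stand-in for a trained, modular per-token classifier (toy log-scores).
    if token[0].isupper():
        return {"B": 1.0, "I": 0.5, "O": 0.0}[label]
    return {"B": 0.0, "I": 0.7, "O": 0.6}[label]

def violations(labels):
    # d(y, C): how often an "I" starts the sequence or follows an "O".
    count, prev = 0, "O"
    for lab in labels:
        if lab == "I" and prev == "O":
            count += 1
        prev = lab
    return count

def ccm_decode(tokens, rho=2.0):
    # Exhaustive argmax over label sequences; fine for toy-sized inputs.
    best, best_score = None, float("-inf")
    for labels in itertools.product(LABELS, repeat=len(tokens)):
        score = sum(local_score(t, l) for t, l in zip(tokens, labels))
        score -= rho * violations(labels)  # soft, declarative constraint
        if score > best_score:
            best, best_score = list(labels), score
    return best

tokens = ["the", "company", "hired", "John"]
print(ccm_decode(tokens, rho=0.0))  # unconstrained model: ['I', 'I', 'I', 'B']
print(ccm_decode(tokens, rho=2.0))  # with constraint:     ['O', 'O', 'O', 'B']
```

Note how the constraint flips the decision without retraining the local scorer; the same constrained decoder is also what lets the abstract's semi-supervised claim work in a self-training loop, where constrained inference assigns constraint-respecting labels to unlabeled examples that are then used to retrain the modular model.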
Keywords
probabilistic modeling, augments probabilistic model, complex decision, probabilistic model, constrained conditional models, expressive dependency structure, expressive output space, intractable training, machine learning research, declarative constraint, machine learning