The mechanism of additive composition

Machine Learning (2017)

Abstract
Additive composition (Foltz et al. in Discourse Process 15:285–307, 1998 ; Landauer and Dumais in Psychol Rev 104(2):211, 1997 ; Mitchell and Lapata in Cognit Sci 34(8):1388–1429, 2010 ) is a widely used method for computing meanings of phrases, which takes the average of vector representations of the constituent words. In this article, we prove an upper bound for the bias of additive composition, which is the first theoretical analysis on compositional frameworks from a machine learning point of view. The bound is written in terms of collocation strength; we prove that the more exclusively two successive words tend to occur together, the more accurate one can guarantee their additive composition as an approximation to the natural phrase vector. Our proof relies on properties of natural language data that are empirically verified, and can be theoretically derived from an assumption that the data is generated from a Hierarchical Pitman–Yor Process. The theory endorses additive composition as a reasonable operation for calculating meanings of phrases, and suggests ways to improve additive compositionality, including: transforming entries of distributional word vectors by a function that meets a specific condition, constructing a novel type of vector representations to make additive composition sensitive to word order, and utilizing singular value decomposition to train word vectors.
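As the abstract notes, additive composition approximates the meaning of a phrase by averaging the distributional vectors of its constituent words. A minimal sketch of that operation (the toy vectors and dimensionality below are hypothetical, for illustration only):

```python
import numpy as np

# Toy distributional word vectors (hypothetical values).
word_vectors = {
    "machine": np.array([0.2, 0.8, 0.1]),
    "learning": np.array([0.6, 0.4, 0.3]),
}

def additive_composition(words, vectors):
    """Approximate a phrase vector by averaging its constituent word vectors."""
    return np.mean([vectors[w] for w in words], axis=0)

phrase_vec = additive_composition(["machine", "learning"], word_vectors)
print(phrase_vec)  # element-wise average of the two word vectors
```

The paper's bound concerns how closely this average approximates the vector one would obtain by treating the phrase itself as a single distributional target, with the approximation guaranteed to be tighter the more exclusively the two words co-occur.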
Keywords
Compositional distributional semantics, Bias and variance, Approximation error bounds, Natural language data, Hierarchical Pitman–Yor process