On the geometry of output-code multi-class learning.
arXiv: Learning (2015)
Abstract
We provide a new perspective on the popular multi-class algorithmic techniques one-vs-all and (error correcting) output codes. We show that in cases where these techniques are successful (at learning from labeled data), they implicitly assume structure on how the classes are related. We show that by making that structure explicit, we can design algorithms to recover the classes from limited labeled data. We provide results for commonly studied cases where the codewords of the classes are well separated: learning a linear one-vs-all classifier for data on the unit ball, and learning a linear error correcting output code when the Hamming distance between the codewords is large (at least $d+1$ in a $d$-dimensional problem). We additionally consider the more challenging case where the codewords are not well separated, but satisfy a boundary features condition.
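To illustrate the output-code setup the abstract refers to, the sketch below shows nearest-codeword (Hamming-distance) decoding for an error correcting output code. The codewords and predicted bit vector are hypothetical examples, not taken from the paper; the point is that when codewords are well separated (minimum pairwise Hamming distance 3 here), a single erroneous binary prediction still decodes to the correct class.

```python
import numpy as np

# Hypothetical codewords for a 3-class problem, encoded with 6 binary
# classifiers. Minimum pairwise Hamming distance is 3, so any single
# bit error is still decoded correctly.
codewords = np.array([
    [0, 0, 0, 0, 0, 0],  # class 0
    [1, 1, 1, 0, 0, 0],  # class 1
    [0, 0, 0, 1, 1, 1],  # class 2
])

def decode(bits, codewords):
    """Return the class whose codeword is nearest in Hamming distance."""
    dists = np.abs(codewords - bits).sum(axis=1)
    return int(np.argmin(dists))

# A predicted bit vector one flip away from class 1's codeword
# still decodes to class 1.
pred = np.array([1, 1, 0, 0, 0, 0])
print(decode(pred, codewords))  # → 1
```

One-vs-all is the special case where the code matrix is the identity: class $i$'s codeword has a 1 only in position $i$, giving pairwise Hamming distance 2.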