Learning In The Presence Of Finitely Or Infinitely Many Irrelevant Attributes

PROCEEDINGS OF THE FOURTH ANNUAL WORKSHOP ON COMPUTATIONAL LEARNING THEORY (1991)

Abstract
This paper addresses the problem of learning boolean functions in query and mistake-bound models in the presence of irrelevant attributes. In learning a concept, a learner may observe a great many more attributes than those that the concept depends upon, and in some sense the presence of extra, irrelevant attributes does not change the underlying concept being learned. Because of this, we are interested not only in the learnability of concept classes, but also in whether the classes can be learned by an algorithm that is attribute-efficient, in that the dependence of the mistake bound (or number of queries) on the number of irrelevant attributes is low. The results presented here apply to projection- and embedding-closed (p.e.c.) concept classes. We show that if a p.e.c. class is learnable attribute-efficiently in the mistake-bound model, then it is learnable in the infinite-attribute mistake-bound model as well. We show in addition how to convert any algorithm that learns a p.e.c. class in the mistake-bound model with membership queries into an algorithm that learns the class attribute-efficiently in that model, or even in the infinite-attribute version. In the membership-query-only model we show that learnability does not always imply attribute-efficient learnability for deterministic algorithms. However, we describe a large class of functions, including the set of monotone functions, for which learnability does imply attribute-efficient learnability in this model. (C) 1995 Academic Press, Inc.
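As a concrete illustration (not from the paper itself) of what attribute-efficiency in the mistake-bound model means, Littlestone's Winnow algorithm is the classic example: for a monotone disjunction over k relevant attributes among n total, its mistake bound grows only logarithmically in n, so piling on irrelevant attributes costs little. A minimal sketch, assuming the standard promotion/demotion variant with multiplier 2 and threshold n/2:

```python
def winnow_learn(examples, n, threshold=None, alpha=2.0):
    """Run Winnow over a stream of (x, label) examples, where x is a
    0/1 tuple of length n and label is 0 or 1.

    Returns (final weight vector, number of mistakes made). For a target
    that is a monotone disjunction of k of the n attributes, the number
    of mistakes is O(k log n) -- attribute-efficient, since it depends
    only logarithmically on the number of irrelevant attributes.
    """
    theta = threshold if threshold is not None else n / 2.0
    w = [1.0] * n          # start with uniform weight 1 on every attribute
    mistakes = 0
    for x, label in examples:
        pred = 1 if sum(w[i] * x[i] for i in range(n)) >= theta else 0
        if pred != label:
            mistakes += 1
            if label == 1:
                # false negative: promote the attributes that were active
                for i in range(n):
                    if x[i]:
                        w[i] *= alpha
            else:
                # false positive: demote the attributes that were active
                for i in range(n):
                    if x[i]:
                        w[i] /= alpha
    return w, mistakes
```

For example, with n = 12 attributes and the target disjunction x0 OR x1, running Winnow over all 4096 labeled examples incurs only a handful of mistakes, far fewer than the thousands of examples seen.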
Keywords
monotone function, boolean function