Improved Learning Of K-Parities

COMPUTING AND COMBINATORICS (COCOON 2018)

Abstract
We consider the problem of learning k-parities in the online mistake-bound model: given a hidden vector $x \in \{0,1\}^n$ of Hamming weight k and a sequence of "questions" $a_1, a_2, \ldots \in \{0,1\}^n$, to each of which the algorithm must reply with $\langle a_i, x \rangle \pmod{2}$, what is the best trade-off between the number of mistakes made by the algorithm and its time complexity? We improve the previous best result of Buhrman et al. [BGM10] by an exp(k) factor in the time complexity. Next, we consider the problem of learning k-parities in the PAC model in the presence of random classification noise of rate $\eta \in (0, 1/2)$. Here, we observe that even in the presence of classification noise of non-trivial rate, it is possible to learn k-parities in time better than $\binom{n}{k/2}$, whereas the current best algorithm for learning noisy k-parities, due to Grigorescu et al. [GRV11], inherently requires time $\binom{n}{k/2}$ even when the noise rate is polynomially small.
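To make the online mistake-bound setting concrete, here is a minimal sketch in Python. It is not the paper's algorithm: it implements the classical halving baseline, which keeps every weight-k candidate consistent with the answers revealed so far and predicts by majority vote, making at most log2 C(n,k) = O(k log n) mistakes at the cost of roughly C(n,k) time and space per round. The function and variable names (halving_learner_demo, parity) are illustrative only.

import itertools
import random

def parity(support, a):
    # Inner product <a, x> mod 2, where x is represented by its support set.
    return sum(a[i] for i in support) % 2

def halving_learner_demo(n=10, k=3, rounds=40, seed=0):
    rng = random.Random(seed)
    hidden = frozenset(rng.sample(range(n), k))  # hidden weight-k vector x
    # Version space: all weight-k candidates still consistent with past answers.
    version_space = [frozenset(c) for c in itertools.combinations(range(n), k)]
    mistakes = 0
    for _ in range(rounds):
        a = [rng.randint(0, 1) for _ in range(n)]  # next question a_i
        ones = sum(parity(c, a) for c in version_space)
        prediction = 1 if 2 * ones > len(version_space) else 0  # majority vote
        truth = parity(hidden, a)
        if prediction != truth:
            mistakes += 1
        # Discard every candidate inconsistent with the revealed answer.
        version_space = [c for c in version_space if parity(c, a) == truth]
    return mistakes

if __name__ == "__main__":
    print("mistakes made:", halving_learner_demo())

This baseline illustrates the trade-off the abstract refers to: a small mistake bound bought with exponential-in-k time, which is the regime the paper's result improves.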
Keywords
Mistake-bound Model, Hidden Vector, Random Classification Noise, Grigorescu, Noiseless Setting