Properly learning monotone functions via local correction

2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), 2022

Abstract
We give a $2^{\tilde{O}(\sqrt{n}/\varepsilon)}$-time algorithm for properly learning monotone Boolean functions under the uniform distribution over $\{0,1\}^{n}$. Our algorithm is robust to adversarial label noise and has a running time nearly matching that of the state-of-the-art improper learning algorithm of Bshouty and Tamon (JACM '96) and an information-theoretic lower bound of Blais et al. (RANDOM '15). Prior to this work, no proper learning algorithm with running time smaller than $2^{\Omega(n)}$ was known to exist. The core of our proper learner is a local computation algorithm for sorting binary labels on a poset. Our algorithm is built on a body of work on distributed greedy graph algorithms; specifically, we rely on a recent work of Ghaffari (FOCS '22), which gives an efficient algorithm for computing maximal matchings in a graph in the LCA model of Rubinfeld et al. and Alon et al. (ICS '11, SODA '12). The applications of our local sorting algorithm extend beyond learning on the Boolean cube: we also give a tolerant tester for Boolean functions over general posets that distinguishes functions that are $\varepsilon/3$-close to monotone from those that are $\varepsilon$-far. Previous tolerant testers for the Boolean cube could only distinguish $\varepsilon/\Omega(\sqrt{n})$-close from $\varepsilon$-far.
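To make the notion of monotonicity under the bitwise partial order on $\{0,1\}^{n}$ concrete, here is a minimal, purely illustrative Python sketch that enumerates violated pairs (comparable inputs whose labels are out of order). The names `is_below` and `monotone_violations` are ours; this brute-force check is not the paper's algorithm, which instead corrects such violations locally via an LCA for sorting labels on the poset rather than inspecting the whole cube.

```python
from itertools import product

def is_below(x, y):
    """Bitwise partial order on {0,1}^n: x <= y iff x_i <= y_i in every coordinate."""
    return all(a <= b for a, b in zip(x, y))

def monotone_violations(f, n):
    """List every comparable pair (x, y) with x <= y but f(x) > f(y).

    Exhaustive O(4^n) enumeration -- illustrative only; the paper's local
    correction procedure never scans the cube globally like this.
    """
    cube = list(product((0, 1), repeat=n))
    return [(x, y) for x in cube for y in cube
            if is_below(x, y) and f(x) > f(y)]

# Majority on 3 bits is monotone: no violated pairs.
n = 3
maj = lambda x: int(sum(x) > n / 2)
assert monotone_violations(maj, n) == []

# Flipping the label of the all-ones point creates violated pairs -- the kind
# of defect that a "label sorting" step on the poset must repair.
noisy = lambda x: 0 if x == (1, 1, 1) else maj(x)
print(len(monotone_violations(noisy, n)))  # 3
```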
Keywords
proper learning,property testing,local reconstruction,monotone functions,local computation algorithms