Learners that Leak Little Information.

arXiv: Learning (2017)

Abstract
We study learning algorithms that are restricted to using a small amount of information from their input sample. We introduce a category of learning algorithms we term d-bit information learners, which are algorithms whose output conveys at most d bits of information about their input. A central theme in this work is that such algorithms generalize. We focus on the learning capacity of these algorithms, and prove sample complexity bounds with tight dependencies on the confidence and error parameters. We also observe connections with well-studied notions such as sample compression schemes, Occam's razor, PAC-Bayes and differential privacy. We discuss an approach that allows us to prove upper bounds on the amount of information that algorithms reveal about their inputs, and also provide a lower bound by showing a simple concept class for which every (possibly randomized) empirical risk minimizer must reveal a lot of information. On the other hand, we show that in the distribution-dependent setting every VC class has empirical risk minimizers that do not reveal a lot of information.
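For concreteness, the defining constraint and the flavor of generalization guarantee it yields can be sketched as follows. The notation here ($S$ for the input sample of size $m$, $A$ for the possibly randomized learner, $L_{\mathcal{D}}$ and $L_S$ for population and empirical loss) is assumed rather than taken from the abstract, and the displayed bound is the standard mutual-information bound of Xu and Raginsky (2017) for losses bounded in $[0,1]$, shown only to illustrate the phenomenon; the paper itself proves sample complexity bounds with tighter dependence on the confidence and error parameters.

A $d$-bit information learner is an algorithm $A$ whose output leaks at most $d$ bits of mutual information about its input sample:
$$ I(S; A(S)) \le d . $$
For a loss bounded in $[0,1]$, the standard bound then controls the expected generalization gap; with $I(S; A(S))$ measured in bits, converting to nats introduces a factor of $\ln 2$:
$$ \left| \mathbb{E}\bigl[ L_{\mathcal{D}}(A(S)) - L_{S}(A(S)) \bigr] \right| \le \sqrt{\frac{d \ln 2}{2m}} , $$
where the expectation is over the draw of $S$ and the internal randomness of $A$. In words: the less an algorithm's output reveals about its sample, the smaller its expected gap between training and test error.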