On the Entropy Loss and Gap of Condensers

Nir Aviv, Amnon Ta-Shma

ACM Transactions on Computation Theory (2019)

Abstract
Many algorithms are proven to work under the assumption that they have access to a source of random, uniformly distributed bits. In practice, however, sources of randomness are often imperfect, giving n random bits that have only k < n bits of min-entropy. The value n − k is called the entropy gap of the source. Randomness condensers are hash functions that map any such source to a shorter source with a reduced entropy gap g. The goal is to lose as little entropy as possible in this process. Condensers also have an error parameter ε and use a short seed of uniformly distributed bits, whose length we wish to minimize as well. In this work, we study the exact dependencies between the different parameters of seeded randomness condensers. We obtain a non-explicit upper bound, showing the existence of condensers with entropy loss log(1 + log(1/ε)/g) + O(1) and seed length log((n − k)/(εg)) + O(1). In particular, this implies the existence of condensers with O(log(1/ε)) entropy gap and constant entropy loss. This extends (with slightly improved parameters) the non-explicit upper bound for condensers presented in the work of Dodis et al. (2014), which gives condensers with entropy loss at least log log(1/ε). We also give a non-explicit upper bound for lossless condensers, which have entropy gap g ≥ log(1/ε)/ε + O(1) and seed length log((n − k)/(ε²g)) + O(1). Furthermore, we address an open question raised by Dodis et al. (2014), who gave an explicit construction of condensers with constant gap and O(log log(1/ε)) loss using seed length O(n log(1/ε)), improved the seed length to O(k log k) in the same article, and asked whether it can be improved further. We reduce the seed length of their construction to O(log(n/ε) · log(k/ε)) by a simple concatenation. In the analysis, we use and prove a tight equivalence between condensers and extractors with multiplicative error. We note that a similar, but non-tight, equivalence was already proven by Dodis et al. (2014) using a weaker variant of extractors called unpredictability extractors. We also remark that this equivalence underlies the work of Ben-Aroya et al. (2016) and later work on explicit two-source extractors, and we believe it is interesting in its own right.
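
For readers outside the area, the following LaTeX sketch restates the standard definitions behind the abstract's parameters and works through one instance of the main bound. The notation (H_∞, Cond, U_d) is the conventional one and is assumed here, not quoted from the paper; the paper's exact conventions may differ slightly.

% A minimal sketch of the standard seeded-condenser definitions,
% in conventional notation (an assumption, not the paper's text).
% Min-entropy of a source X:
\[
  H_\infty(X) \;=\; \min_{x} \log \frac{1}{\Pr[X = x]} .
\]
% A function Cond : \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m is a
% (k \to_\epsilon k') condenser if for every X with H_\infty(X) \ge k,
% the output Cond(X, U_d) is \epsilon-close to some distribution with
% min-entropy at least k'. The two quantities the abstract trades off:
\[
  \text{entropy loss} \;=\; (k + d) - k', \qquad
  \text{entropy gap} \;=\; g \;=\; m - k' .
\]
% Instantiating the existential bound
%   loss = \log\bigl(1 + \log(1/\epsilon)/g\bigr) + O(1)
% with gap g = \log(1/\epsilon) gives loss = \log 2 + O(1) = O(1):
% constant entropy loss with O(\log(1/\epsilon)) gap, as claimed.

In words: the seed contributes d truly random bits, so a condenser whose output carries k' bits of min-entropy "loses" (k + d) − k' bits overall, and the closer the output length m is to k', the smaller the remaining gap.
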
Keywords
Randomness extractors, entropy gap, entropy loss, key derivation, randomness condensers, unpredictability extractors