Separating Computational and Statistical Differential Privacy (Under Plausible Assumptions)

arXiv (2022)

Abstract
Computational differential privacy (CDP) is a natural relaxation of the standard notion of (statistical) differential privacy (SDP), proposed by Beimel, Nissim, and Omri (CRYPTO 2008) and Mironov, Pandey, Reingold, and Vadhan (CRYPTO 2009). In contrast to SDP, CDP only requires privacy guarantees to hold against computationally bounded adversaries rather than computationally unbounded statistical adversaries. Despite the question being raised explicitly in several works (e.g., Bun, Chen, and Vadhan, TCC 2016), it has remained tantalizingly open whether there is any task achievable under the CDP notion but not the SDP notion; even a candidate for such a task was unknown, and indeed it was unclear what the answer should be. In this work, we give the first construction of a task achievable with the CDP notion but not the SDP notion. More specifically, under strong but plausible cryptographic assumptions, we construct a task for which there exists an $\varepsilon$-CDP mechanism with $\varepsilon = O(1)$ achieving $1-o(1)$ utility, but any $(\varepsilon, \delta)$-SDP mechanism, even a computationally unbounded one, that achieves constant utility must use either a super-constant $\varepsilon$ or a non-negligible $\delta$. To prove this, we introduce a new approach for showing that a mechanism satisfies CDP: first we show that the mechanism is "private" against a certain class of decision tree adversaries, and then we use cryptographic constructions to "lift" this into privacy against computational adversaries. We believe this approach could be useful for devising further tasks separating CDP from SDP.
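For orientation, the two privacy notions contrasted above can be sketched as follows; these are the standard formulations from the differential privacy literature, and the paper's exact definitions may differ in details such as how the negligible error term is handled. A randomized mechanism $M$ is $(\varepsilon, \delta)$-SDP if for all neighboring datasets $x \sim x'$ and all output events $S$,
\[
\Pr[M(x) \in S] \le e^{\varepsilon} \cdot \Pr[M(x') \in S] + \delta,
\]
whereas $\varepsilon$-CDP (in the indistinguishability-based formulation of Mironov, Pandey, Reingold, and Vadhan) only requires that, for every polynomial-size adversary $A$ and security parameter $\kappa$,
\[
\Pr[A(M(x)) = 1] \le e^{\varepsilon} \cdot \Pr[A(M(x')) = 1] + \mathrm{negl}(\kappa).
\]
The separation constructed in the paper exhibits a task for which the second, computational guarantee is achievable with constant $\varepsilon$, while the first, information-theoretic guarantee is not achievable with constant $\varepsilon$ and negligible $\delta$.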