Can the Utility of Anonymized Data be Used for Privacy Breaches?

ACM Transactions on Knowledge Discovery from Data (TKDD)(2011)

Cited by 101 | Views 60
Abstract
Group-based anonymization is the most widely studied approach to privacy-preserving data publishing. Privacy models/definitions built on group-based anonymization include k-anonymity, l-diversity, and t-closeness, to name a few. The goal of this article is to raise a fundamental issue, overlooked in past work, regarding the privacy exposure of approaches that use group-based anonymization. Group-based anonymization by bucketization essentially hides each individual record behind a group to preserve data privacy. If the data are not properly anonymized, however, patterns can be derived from the published data itself and used by an adversary to breach individual privacy. For example, if a pattern such as "people from certain countries rarely suffer from a particular disease" can be derived from released medical records, that pattern can be used to link other people in an anonymized group to the disease with higher likelihood. We call such patterns derived from the published data the foreground knowledge, in contrast to the background knowledge that an adversary may obtain through other channels, as studied in some previous work. Finally, our experimental results show that such an attack is realistic on the privacy benchmark dataset under the traditional group-based anonymization approach.
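The attack described in the abstract can be illustrated with a small sketch. The toy data, group layout, and attribute names below are hypothetical, not taken from the paper: each published bucket lists its members' quasi-identifier (country) and a bag of sensitive values (diseases) with the within-group linkage removed. The adversary first mines foreground knowledge, an estimate of P(disease | country), from the release itself, then uses it to re-weight the possible linkages inside a target group.

```python
from collections import Counter
from itertools import permutations

# Hypothetical bucketized release: quasi-identifiers and sensitive values
# are published per group, but which member has which disease is hidden.
groups = [
    {"qi": ["Japan", "Japan", "USA"],   "sv": ["flu", "flu", "HIV"]},
    {"qi": ["Japan", "USA"],            "sv": ["flu", "HIV"]},
    {"qi": ["Japan", "Japan", "Japan"], "sv": ["flu", "flu", "flu"]},
    {"qi": ["Japan", "USA"],            "sv": ["flu", "HIV"]},  # target group
]

# Step 1: derive foreground knowledge from the published data alone:
# estimate P(disease | country) by treating every within-group linkage
# as equally likely.
w, tot = Counter(), Counter()
for g in groups:
    n = len(g["sv"])
    for c in g["qi"]:
        for d in g["sv"]:
            w[(c, d)] += 1.0 / n
            tot[c] += 1.0 / n
p = {(c, d): v / tot[c] for (c, d), v in w.items()}

# Step 2: attack the target group. Enumerate the possible one-to-one
# assignments of diseases to members, weight each assignment by the
# derived pattern, and read off the adversary's posterior belief that
# the USA member has HIV.
target = groups[3]
weight_hiv_on_usa = weight_total = 0.0
for assignment in set(permutations(target["sv"])):
    wt = 1.0
    for c, d in zip(target["qi"], assignment):
        wt *= p.get((c, d), 0.0)
    weight_total += wt
    if assignment[target["qi"].index("USA")] == "HIV":
        weight_hiv_on_usa += wt

prior = target["sv"].count("HIV") / len(target["sv"])  # 0.5 under naive bucketization
posterior = weight_hiv_on_usa / weight_total
print(f"prior={prior:.2f}, posterior={posterior:.2f}")
```

Because Japanese members almost always co-occur with flu in this toy release, the derived pattern shifts the HIV linkage in the target group toward the USA member, raising the adversary's belief well above the uniform prior that bucketization is assumed to provide.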
Keywords
data publishing, privacy-preserving data publishing, anonymization approach, anonymized data, data privacy, privacy model, privacy preservation, published data, traditional group, privacy benchmark dataset, anonymized group, k-anonymity, privacy exposure, individual privacy, privacy breaches, l-diversity, medical records