Protecting Publicly Available Data With Machine Learning Shortcuts
CoRR (2023)
Abstract
Machine-learning (ML) shortcuts or spurious correlations are artifacts in
datasets that lead to very good training and test performance but severely
limit the model's generalization capability. Such shortcuts are insidious
because they go unnoticed due to good in-domain test performance. In this
paper, we explore the influence of different shortcuts and show that even
simple shortcuts are difficult to detect by explainable AI methods. We then
exploit this fact and design an approach to defend online databases against
crawlers: providers such as dating platforms, clothing manufacturers, or used
car dealers have to deal with a professionalized crawling industry that grabs
and resells data points on a large scale. We show that a deterrent can be
created by deliberately adding ML shortcuts. Such augmented datasets are then
unusable for ML use cases, which deters crawlers and the unauthorized use of
data from the internet. Using real-world data from three use cases, we show
that the proposed approach renders such collected data unusable, while the
shortcut is at the same time difficult to notice in human perception. Thus, our
proposed approach can serve as a proactive protection against illegitimate data
crawling.
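To make the idea concrete, a minimal sketch of one way such a shortcut could be injected into image data is shown below. This is an illustration under our own assumptions, not the authors' implementation: a faint, class-dependent pixel perturbation (seeded by the label) is added to every image, so a model trained on the crawled copies latches onto this spurious correlation instead of the real signal, while the change stays near-invisible to humans.

```python
import numpy as np

def add_shortcut(image: np.ndarray, label: int, strength: float = 4.0) -> np.ndarray:
    """Embed a faint, label-dependent pixel pattern into a grayscale image.

    Hypothetical illustration: `image` is an H x W uint8 array. The pattern
    is a fixed pseudo-random +/-strength perturbation seeded by the class
    label, so every image of a class carries the same hidden correlation.
    """
    rng = np.random.default_rng(label)            # one fixed pattern per class
    pattern = rng.choice([-strength, strength], size=image.shape)
    poisoned = image.astype(np.float64) + pattern
    return np.clip(poisoned, 0, 255).astype(np.uint8)

# Example: poison a mid-gray 8x8 image with the class-0 pattern.
img = np.full((8, 8), 128, dtype=np.uint8)
poisoned = add_shortcut(img, label=0)
```

With a small `strength`, the per-pixel change is bounded and hard to perceive, yet it is perfectly predictive of the label, which is exactly the property that makes the augmented dataset useless for downstream ML training.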
Keywords
machine learning shortcuts, publicly available data, machine learning