Short Review on Supervised Learning Under Adversarial Label Poisoning and Clean-Label Data Poisoning

Pooya Tavllali, Vahid Behzadan

2023 Congress in Computer Science, Computer Engineering, & Applied Computing (CSCE), 2023

Abstract
Training under adversarial label poisoning is one of the more recent topics to attract attention in the machine learning literature. Label poisoning is a data-poisoning technique within the broader family of adversarial attacks. Adversarial attacks alter the data so that some data points are misclassified, and they are categorized as white-box or black-box attacks: in white-box attacks the attacker has partial or complete knowledge of the model, whereas in black-box attacks the attacker has no information about the model. Data poisoning refers to the manipulation of training points for adversarial ends. A subcategory of this attack is label poisoning, in which labels are manipulated or new data points are added to the dataset. This short review surveys label poisoning attacks and briefly mentions other related topics that have received attention in the development of techniques against label poisoning.
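To make the attack described above concrete, the following is a minimal, hypothetical sketch of an untargeted label-flipping attack, the simplest form of label poisoning: a fraction of training labels is randomly reassigned to a different class. The function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def flip_labels(y, flip_fraction, num_classes, seed=0):
    """Randomly flip a fraction of labels to a different class
    (a simple untargeted label-flipping poisoning attack)."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    n_flip = int(flip_fraction * len(y))
    # pick distinct indices to poison
    idx = rng.choice(len(y), size=n_flip, replace=False)
    for i in idx:
        # reassign to any class other than the original label
        choices = [c for c in range(num_classes) if c != y_poisoned[i]]
        y_poisoned[i] = rng.choice(choices)
    return y_poisoned

# Example: flip 20% of 100 binary labels
y = np.zeros(100, dtype=int)
y_p = flip_labels(y, flip_fraction=0.2, num_classes=2)
print((y != y_p).sum())  # number of labels that were flipped
```

A targeted variant would instead flip labels only for points near the decision boundary or toward a specific attacker-chosen class; the review's scope covers both settings.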
Keywords
Data poisoning, label poisoning, label flipping, label noise