A Neural Attack Model for Cracking Passwords in Adversarial Environments

2019 IEEE/CIC International Conference on Communications in China (ICCC), 2019

Abstract
In many scenarios, one has to enter a text or graphical password in a public area, such as unlocking a smartphone on the street or entering a password when paying with a debit card in a shopping mall. The environment in which the password is entered may be adversarial, as it is almost impossible to prevent adversaries from premeditated installation of surveillance and/or eavesdropping equipment in public areas. In this work, we investigate password security in such extreme adversarial environments, in which every single interaction between humans (provers) and input terminals (verifiers) is transparent to the attacker. We first present a neural network-based attack model, which consists of a feature extraction model and a prediction model. Experimental results show that the neural model attains an accuracy of more than 80% in password prediction across three real-world authentication systems. We also propose a risk alert system based on the attack model, which can issue a timely warning when the password in use is at high security risk.
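The abstract describes a two-stage pipeline, a feature extraction model feeding a prediction model, but gives no architectural details. As a purely hypothetical illustration (the function names, dimensions, weight matrices, and the encoding of observed interactions are all assumptions, not taken from the paper), the overall shape of such a pipeline can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(interactions, W_feat):
    """Hypothetical feature extractor: projects raw observed interaction
    signals (e.g. touch coordinates, timings) into a feature space.
    W_feat stands in for learned weights."""
    return np.tanh(interactions @ W_feat)

def predict_password_chars(features, W_pred):
    """Hypothetical prediction head: a softmax distribution over a
    character vocabulary for each observed interaction."""
    logits = features @ W_pred
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy dimensions (assumptions): 6 observed interactions, 4 raw signals
# each, an 8-dim feature space, and a 10-character vocabulary.
interactions = rng.normal(size=(6, 4))
W_feat = rng.normal(size=(4, 8))
W_pred = rng.normal(size=(8, 10))

probs = predict_password_chars(extract_features(interactions, W_feat), W_pred)
guess = probs.argmax(axis=1)  # most likely character index per interaction
```

In a real attack the two weight matrices would be trained end to end on recorded prover-verifier interactions; the sketch only shows how extracted features flow into a per-interaction character prediction.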
Keywords
debit card, shopping mall, premeditated installation, public area, password security, neural network-based attack model, feature extraction model, prediction model, neural model, password prediction, neural attack model, cracking passwords, adversarial environments, risk alert system