Detection of Adversarial Attacks against the Hybrid Convolutional Long Short-Term Memory Deep Learning Technique for Healthcare Monitoring Applications

APPLIED SCIENCES-BASEL (2023)

Abstract
Deep learning (DL) models are frequently employed to extract valuable features from heterogeneous, high-dimensional healthcare data, which are used to track patient well-being in healthcare monitoring systems. The training and testing data for such models are collected by large numbers of IoT devices and may therefore contain noise (e.g., incorrect labels, abnormal readings, and incomplete information) and be subject to various types of adversarial attacks. To ensure the reliability of Internet of Healthcare Things (IoHT) applications, the training and testing data required by such DL techniques must therefore be guaranteed to be clean. This paper proposes a hybrid convolutional long short-term memory (ConvLSTM) technique that safeguards the reliability of IoHT monitoring applications by detecting anomalies and adversarial content in the training data used to develop DL models. Countermeasure techniques are also suggested to protect the DL models against such adversarial attacks during the training phase. An experimental evaluation on the public PhysioNet dataset demonstrates the ability of the proposed model to detect anomalous readings in the presence of adversarial attacks introduced in both the training and testing stages. The evaluation results show that the model achieved an average F1 score of 97% and an accuracy of 98%, despite the introduction of these attacks.
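As an illustration of the kind of hybrid convolutional-plus-LSTM architecture the abstract describes, the sketch below stacks a 1D convolutional feature extractor in front of an LSTM layer for binary anomaly classification of physiological time series. This is a minimal, assumption-laden sketch in Keras, not the authors' exact model: the window length, feature count, layer sizes, and training settings are placeholders, and the PhysioNet preprocessing and adversarial-attack countermeasures are not shown.

# Minimal Conv1D + LSTM anomaly-classification sketch (illustrative only;
# all hyperparameters below are assumptions, not values from the paper).
import tensorflow as tf
from tensorflow.keras import layers, models

TIMESTEPS, N_FEATURES = 128, 8  # placeholder window length / number of sensor channels

model = models.Sequential([
    layers.Input(shape=(TIMESTEPS, N_FEATURES)),
    layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),  # local temporal features
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),                        # longer-range temporal dependencies
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # anomalous vs. normal reading
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=20)
# X_train / X_val: windowed sensor readings shaped (samples, TIMESTEPS, N_FEATURES)

A sequential Conv1D-then-LSTM stack is one common way to realize a "hybrid ConvLSTM"; the paper may instead use convolutional gating inside the recurrent cell (e.g., Keras ConvLSTM layers), so treat the structure above as a generic baseline rather than the published design.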
Keywords
Internet of Healthcare Things (IoHT), anomaly detection, deep learning, convolutional long short-term memory (ConvLSTM), adversarial attacks