Secure Convolutional Neural Network-Based Internet-of-Healthcare Applications

IEEE Access (2023)

Abstract
Convolutional neural networks (CNNs) have gained popularity for Internet-of-Healthcare (IoH) applications such as medical diagnostics. However, recent research shows that adversarial attacks with slight, imperceptible changes can undermine deep neural network techniques in healthcare, which raises questions about the safety of deploying these IoH devices in clinical settings. In this paper, we first review the techniques used to defend against such cyber-attacks. We then study the robustness of several well-known CNN architectures belonging to the sequential, parallel, and residual families, namely LeNet5, MobileNetV1, VGG16, ResNet50, and InceptionV3, against fast gradient sign method (FGSM) and projected gradient descent (PGD) attacks, in the context of chest radiograph (X-ray) classification for an IoH application. Finally, we propose to improve the security of these CNN structures by comparing standard and adversarial training. The results show that, among these models, smaller models with lower computational complexity are more robust to adversarial threats than the larger models frequently used in IoH applications. Furthermore, we show that when these networks are trained adversarially, they can outperform their standard-trained counterparts. The experimental results demonstrate that the model performance breakpoint is at $\gamma = 0.3$, with a maximum tolerated accuracy loss of 2%.
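The abstract evaluates FGSM and PGD attacks on chest X-ray classifiers. The sketch below is a minimal illustration, not the authors' implementation: it assumes a PyTorch pipeline, and the toy CNN, the random tensors standing in for X-ray batches, and the alpha/steps values are illustrative assumptions, with epsilon = 0.3 chosen only to mirror the 0.3 breakpoint reported above.

```python
# Hedged sketch of the two attacks named in the abstract (FGSM and PGD),
# using an assumed PyTorch setup with placeholder model and data.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.3):
    """One-step FGSM: x_adv = x + epsilon * sign(grad_x loss), clamped to [0, 1]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def pgd_attack(model, x, y, epsilon=0.3, alpha=0.01, steps=10):
    """Iterative PGD: repeated signed-gradient steps projected onto the
    L-infinity ball of radius epsilon around the clean input."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.CrossEntropyLoss()(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)  # project
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

if __name__ == "__main__":
    # Toy stand-in for LeNet5 / ResNet50 etc.; random 28x28 grayscale batch
    # stands in for chest X-rays (hypothetical data, for illustration only).
    model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.Flatten(), nn.Linear(8 * 28 * 28, 2))
    x = torch.rand(4, 1, 28, 28)
    y = torch.randint(0, 2, (4,))
    x_fgsm = fgsm_attack(model, x, y)
    x_pgd = pgd_attack(model, x, y)
    print((x_fgsm - x).abs().max(), (x_pgd - x).abs().max())  # both bounded by epsilon
```

Adversarial training, as studied in the paper, would reuse such attack routines to generate perturbed batches during training; the exact schedule and hyperparameters used by the authors are not given in the abstract.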
Keywords
Medical services, Internet of Medical Things, Security, Convolutional neural networks, Biological system modeling, Privacy, Computational modeling, Generative adversarial networks, internet of healthcare, medical data, adversarial attacks, security and privacy