GUARDIAN: A Hardware-Assisted Distributed Framework to Enhance Deep Learning Security

IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS (2023)

Abstract
The ubiquity of artificial intelligence (AI) has led to its extensive research and application in fields such as computer vision, natural language processing, and medical image analysis. However, responsible AI faces severe security challenges, including the leakage of pretrained models and valuable training data. Existing solutions adopt new algorithm designs (such as federated learning) or cryptography (such as homomorphic encryption) to prevent possible security vulnerabilities. We observe that hardware-assisted trusted execution environments (TEEs) can further improve machine learning responsibility. Intel Software Guard Extensions (SGX) is popular trusted execution hardware that enables users' programs to run in an untrusted environment, such as a malicious operating system, while ensuring the confidentiality and integrity of their data. We therefore design GUARDIAN, a hardware-assisted secure machine learning training framework that protects data security throughout the training process. We analyze the typical characteristics of machine learning applications and characterize GUARDIAN through extensive experiments. Our findings quantify the performance degradation introduced by these security guarantees, which points to feasible optimization directions in the near future.
Keywords
security, deep learning, framework, hardware-assisted