Detection of Compromised Models Using Bayesian Optimization.

Australasian Conference on Artificial Intelligence (2019)

Abstract
Modern AI is largely driven by machine learning. Recent machine learning models such as deep neural networks (DNNs) have become highly effective at many recognition tasks, e.g., object recognition, face recognition, and speech recognition. Due to their effectiveness, these models already serve user needs in the real world. To handle service requests from a large number of users and meet round-the-clock demand, such models are usually hosted on cloud platforms (e.g., Microsoft Azure ML Studio). Hosting a model on the cloud raises security concerns: for example, a malicious third party may alter the model while it is in transit to the cloud, or the cloud provider itself may apply lossy compression to the model to manage server resources efficiently. We propose a method to detect such model compromises via sensitive samples. Finding the best sensitive sample reduces to an optimization problem in which the sensitive sample maximizes the difference between the predictions of the original and the modified model. This optimization problem is challenging because (1) the altered model is unknown, (2) the sensitive sample must be searched for in a high-dimensional data space, and (3) the problem is non-convex. To overcome these challenges, we first use a variational autoencoder to transform the high-dimensional data into a non-linear low-dimensional space, and then apply Bayesian optimization to find the optimal sensitive sample. Our proposed method can generate a sensitive sample that detects model compromise without incurring much cost from multiple queries.
Keywords
Cloud service, Sensitive sample, Bayesian optimization
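The approach described in the abstract can be sketched end to end: decode a low-dimensional latent point into data space, score it by the prediction gap between the original and a (simulated) modified model, and let Bayesian optimization search the latent space for the point with the largest gap. The sketch below is a minimal illustration, not the paper's implementation: the decoder and both models are toy stand-ins, the Gaussian-process surrogate uses a fixed RBF kernel, and the UCB acquisition function is an assumption (the paper does not specify one here).

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical stand-ins; the paper uses a trained VAE decoder and real DNNs ---
def decode(z):
    """Toy 'decoder': lifts a 2-D latent point into an 8-D data space."""
    W = np.arange(16).reshape(2, 8) / 10.0
    return np.tanh(z @ W)

def model_original(x):
    w = np.linspace(-1.0, 1.0, 8)
    return np.tanh(x @ w)

def model_modified(x):
    # Simulated compromise: the same model with slightly perturbed weights.
    w = np.linspace(-1.0, 1.0, 8) + 0.05
    return np.tanh(x @ w)

def sensitivity(z):
    """Objective: prediction gap between original and modified model."""
    x = decode(z)
    return abs(model_original(x) - model_modified(x))

# --- Minimal Bayesian optimization: GP surrogate + UCB acquisition ---
def rbf(A, B, ls=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls**2))

def bayes_opt(f, dim=2, n_init=5, n_iter=20, bound=2.0):
    # Initial random design in the latent space.
    Z = rng.uniform(-bound, bound, (n_init, dim))
    y = np.array([f(z) for z in Z])
    for _ in range(n_iter):
        K_inv = np.linalg.inv(rbf(Z, Z) + 1e-6 * np.eye(len(Z)))
        cand = rng.uniform(-bound, bound, (256, dim))   # candidate latent points
        Ks = rbf(cand, Z)
        mu = Ks @ K_inv @ y                             # GP posterior mean
        var = 1.0 - np.einsum("ij,jk,ik->i", Ks, K_inv, Ks)
        ucb = mu + 2.0 * np.sqrt(np.clip(var, 0.0, None))
        z_next = cand[np.argmax(ucb)]                   # most promising candidate
        Z = np.vstack([Z, z_next])
        y = np.append(y, f(z_next))
    best = np.argmax(y)
    return Z[best], y[best]

z_star, gap = bayes_opt(sensitivity)
print(f"best latent point {z_star}, prediction gap {gap:.4f}")
```

In this setting a nonzero prediction gap at the optimized sensitive sample flags the hosted model as compromised, while only a small number of black-box queries to the hosted model are needed, since the search itself runs against the locally held original.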