GuaranTEE: Towards Attestable and Private ML with CCA
Workshop on Machine Learning and Systems (2024)
Abstract
Machine-learning (ML) models are increasingly being deployed on edge devices
to provide a variety of services. However, their deployment is accompanied by
challenges in model privacy and auditability. Model providers want (i) to
ensure that their proprietary models are not exposed to third parties; and
(ii) to obtain attestations that their genuine models are operating on edge
devices in accordance with the service agreement with the user. Existing
measures to address these challenges have been hindered by issues such as high
overheads and limited capability (processing/secure memory) on edge devices.
In this work, we propose GuaranTEE, a framework to provide attestable private
machine learning on the edge. GuaranTEE uses Arm's Confidential Compute
Architecture (CCA), the latest Arm architectural extension, which allows for
the creation and deployment of dynamic Trusted Execution Environments (TEEs) within
which models can be executed. We evaluate CCA's feasibility to deploy ML models
by developing, evaluating, and openly releasing a prototype. We also suggest
improvements to CCA to facilitate its use in protecting the entire ML
deployment pipeline on edge devices.
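To illustrate the attestation goal in (ii), where a provider verifies that its genuine model is the one loaded on the device, here is a minimal, stdlib-only Python sketch. The HMAC-based token, the shared key, and all function names are illustrative assumptions for exposition; real CCA attestation uses hardware-rooted asymmetric signatures and a standardized token format, not a shared secret.

```python
import hashlib
import hmac

# Hypothetical shared key between the device's TEE and the verifier;
# this stands in for CCA's hardware-rooted signing key.
ATTESTATION_KEY = b"demo-realm-attestation-key"

def measure(model_bytes: bytes) -> bytes:
    """Hash the model binary to obtain its measurement."""
    return hashlib.sha256(model_bytes).digest()

def make_token(model_bytes: bytes) -> bytes:
    """Device side: authenticate the measurement of the loaded model."""
    return hmac.new(ATTESTATION_KEY, measure(model_bytes), hashlib.sha256).digest()

def verify_token(token: bytes, expected_model: bytes) -> bool:
    """Provider side: check the attested measurement against the genuine model."""
    expected = hmac.new(ATTESTATION_KEY, measure(expected_model), hashlib.sha256).digest()
    return hmac.compare_digest(token, expected)

genuine = b"genuine model weights"
tampered = b"tampered model weights"
token = make_token(genuine)
print(verify_token(token, genuine))   # True
print(verify_token(token, tampered))  # False
```

The sketch captures only the measure-then-verify pattern: a tampered model yields a different measurement, so the provider's check fails.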