MOTUS: How Quantized Parameters Improve Protection of Model and Its Inference Input

Hiromasa Kitai, Naoto Yanai, Kazuki Iwahana, Masataka Tatsumi, Jason Paul Cruz

Innovative Security Solutions for Information Technology and Communications (2023)

Abstract
Protecting a machine learning model and its inference inputs with secure computation is important for providing services built on a valuable model. In this paper, we discuss how quantization of a model's parameters helps protect both the model and its inference inputs. To this end, we present an investigational protocol, MOTUS, based on ternary neural networks, i.e., networks whose parameters are ternarized. Through extensive experiments with MOTUS, we obtained three key insights. First, ternary neural networks avoid the accuracy deterioration caused by the modulo operations of secure computation. Second, increasing the number of model parameter candidates improves accuracy more than an existing accuracy-improvement technique, batch normalization. Third, protecting both the model and the inference inputs reduces inference throughput by a factor of four to seven, at the same level of accuracy, compared with existing protocols that protect only the inference inputs. Our source code is publicly available via GitHub.
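The abstract centers on ternary neural networks, whose weights are restricted to three values. As a minimal illustration (not the paper's exact quantization scheme), the following sketch ternarizes a weight tensor with a magnitude threshold in the style of threshold-based ternary weight networks; the threshold factor `delta_factor` is an assumed choice for illustration only.

```python
# Illustrative threshold-based weight ternarization: map each real-valued
# weight to {-1, 0, +1}. The threshold (a fraction of the mean absolute
# weight) is an assumption for this sketch, not the protocol's definition.
import numpy as np

def ternarize(weights: np.ndarray, delta_factor: float = 0.7) -> np.ndarray:
    """Map real-valued weights to {-1.0, 0.0, +1.0} via a magnitude threshold."""
    delta = delta_factor * np.mean(np.abs(weights))  # per-tensor threshold
    ternary = np.zeros_like(weights)
    ternary[weights > delta] = 1.0
    ternary[weights < -delta] = -1.0
    return ternary

w = np.array([0.9, -0.05, 0.3, -0.8, 0.02])
print(ternarize(w))  # every entry is in {-1.0, 0.0, 1.0}
```

Restricting parameters to three small integers is what makes the model amenable to secure computation over a modulus, since ternary values survive modular arithmetic without the rounding loss that full-precision weights would incur.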
Keywords
quantized parameters, MOTUS, model, protection