The role of compute nodes in privacy-aware decentralized AI

Mobile Systems, Applications, and Services (2022)

Abstract
Mobile devices generate and store voluminous data valuable for training machine learning (ML) models. Decentralized ML model training approaches eliminate the need for sharing such privacy-sensitive data with centralized entities by expecting each data owner that participates in an ML model training process to compute updates locally and share them with other entities. However, the size of state-of-the-art ML models and the computational demands of producing local updates prohibit mobile devices from participating in the decentralized training of such models. Split learning techniques can be combined with decentralized model training protocols to enable the involvement of mobile devices in model training while preserving the privacy of their data. Mobile devices can produce local updates by splitting the model they are training into multiple parts and delegating the processing of the computationally demanding parts to compute nodes. This work examines the impact of the number of available compute nodes and the way they interact. We split the ResNet101 ML model into 3, 4, and 5 parts, keep the first and the last part at the data owner, and assign the processing of the middle parts to compute nodes. Additionally, we analyze the training time when the compute nodes assist multiple data owners in parallel or are responsible for different model parts by forming a pipeline.
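The splitting scheme described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (the function names and the equal-size partitioning heuristic are assumptions, not the paper's actual implementation): a model is treated as an ordered list of layers, cut into contiguous parts, with the first and last part kept at the data owner and the middle parts handed to compute nodes.

```python
def split_model(layers, num_parts):
    """Split an ordered list of layers into num_parts contiguous
    chunks of near-equal size (a simple assumed heuristic)."""
    base, extra = divmod(len(layers), num_parts)
    parts, start = [], 0
    for i in range(num_parts):
        size = base + (1 if i < extra else 0)
        parts.append(layers[start:start + size])
        start += size
    return parts

def assign_parts(parts):
    """Keep the first and last part at the data owner, as in the paper's
    setup; delegate the middle parts to compute nodes."""
    return {
        "data_owner": [parts[0], parts[-1]],
        "compute_nodes": parts[1:-1],
    }

# Example: a toy 10-layer model split into 4 parts, as with the
# 4-part ResNet101 configuration in the paper (layer count differs).
assignment = assign_parts(split_model(list(range(10)), 4))
```

During training, the data owner would run a forward pass through its first part, send the intermediate activations to the compute nodes holding the middle parts (sequentially, or pipelined as the paper explores), and complete the pass on its last part; gradients flow back along the same path.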