Opportunities for Optimizing the Container Runtime

2022 IEEE/ACM 7th Symposium on Edge Computing (SEC), 2022

Abstract
Container-based virtualization provides lightweight mechanisms for process isolation and resource control that are essential for maintaining a high degree of multi-tenancy in Function-as-a-Service (FaaS) platforms, where compute functions are instantiated on-demand and exist only as long as their execution is active. This model is especially advantageous for Edge computing environments, where hardware resources are limited due to physical space constraints. Despite their many advantages, state-of-the-art container runtimes still suffer from startup delays of several hundred milliseconds. This delay adversely impacts user experience for existing human-in-the-loop applications and quickly erodes the low latency response times required by emerging machine-in-the-loop IoT and Edge computing applications utilizing FaaS. In turn, it causes developers of these applications to employ unsanctioned workarounds that artificially extend the lifetime of their functions, resulting in wasted platform resources. In this paper, we provide an exploration of the cause of this startup delay and insight on how container-based virtualization might be made more efficient for FaaS scenarios at the Edge. Our results show that a small number of container startup operations account for the majority of cold start time, that several of these operations have room for improvement, and that startup time is largely bound by the underlying operating system mechanisms that are the building blocks for containers. We draw on our detailed analysis to provide guidance toward developing a container runtime for Edge computing environments and demonstrate how making a few key improvements to the container creation process can lead to a 20% reduction in cold start time.
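The OS mechanisms the abstract refers to are primarily Linux namespaces and cgroups, which container runtimes compose during container creation. The sketch below, which is illustrative and not the paper's benchmark methodology, times how long it takes to launch a process inside a fresh set of namespaces (Linux only, typically requires root); the `/bin/true` payload and the particular namespace flags are assumptions chosen for demonstration.

```go
// Minimal sketch (Linux only): measure the cost of spawning a process in new
// namespaces, the same kernel primitives container runtimes build on.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("/bin/true")
	cmd.SysProcAttr = &syscall.SysProcAttr{
		// New UTS, PID, mount, IPC, and network namespaces; network
		// namespace setup is commonly among the more expensive steps.
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID |
			syscall.CLONE_NEWNS | syscall.CLONE_NEWIPC | syscall.CLONE_NEWNET,
	}

	start := time.Now()
	if err := cmd.Run(); err != nil {
		fmt.Println("run failed (requires root on Linux):", err)
		return
	}
	fmt.Printf("namespaced process start+exit took %v\n", time.Since(start))
}
```

Comparing this against the same command run without `Cloneflags` gives a rough sense of how much of a container cold start is attributable to namespace creation alone, as opposed to runtime-level work such as image mounting and cgroup configuration.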
Keywords
Containers, Edge Computing, Runtime, Serverless, FaaS