Interference-Aware Workload Placement for Improving Latency Distribution of Converged HPC/Big Data Cloud Infrastructures.

International Conference/Workshop on Embedded Computer Systems: Architectures, Modeling and Simulation (SAMOS), 2021

Abstract
Recently, the High Performance Computing, Big Data, and Cloud Computing worlds have been converging in terms of workload deployment, with containerization technology acting as an enabler in this direction. In such scenarios of application diversity and multi-tenancy, a universal scheduler is required that satisfies end-user needs for seamless yet efficient application deployment. While the Kubernetes container orchestrator appears to be the answer for application-agnostic deployment, it still relies heavily on coarse system metrics for its scheduling policies, thus neglecting the performance degradation caused by resource contention in the underlying system. In this paper, we design and implement an interference-aware modular framework able to balance incoming workload based on low-level metric monitoring. We evaluate the proposed solution over different workload mixes and co-location scenarios and show that, compared to the state-of-the-art but interference-unaware Kubernetes scheduler, the proposed framework significantly improves the latency distribution of the converged cloud infrastructure, improving median latency by up to 27% and reducing its standard deviation by up to 25%.
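To make the placement idea concrete, the sketch below shows how an interference-aware scoring step might rank candidate nodes by low-level contention signals instead of coarse utilization alone, in the spirit of a Kubernetes scheduler extender. This is a minimal illustration, not the authors' implementation: the metric names (llc_miss_rate, mem_bw_util, cpu_util) and the weights are assumptions chosen for exposition, and the paper's actual low-level metrics and balancing logic may differ.

```python
# Hedged sketch: interference-aware node ranking for a new latency-sensitive pod.
# All field names and weights below are illustrative assumptions, not the paper's.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class NodeMetrics:
    """Low-level contention signals sampled per node (hypothetical fields)."""
    llc_miss_rate: float  # last-level-cache miss pressure, normalized to 0..1
    mem_bw_util: float    # memory-bandwidth utilization, normalized to 0..1
    cpu_util: float       # coarse CPU utilization, normalized to 0..1


def interference_score(m: NodeMetrics,
                       w_llc: float = 0.5,
                       w_bw: float = 0.3,
                       w_cpu: float = 0.2) -> float:
    """Lower score means less expected interference on that node."""
    return w_llc * m.llc_miss_rate + w_bw * m.mem_bw_util + w_cpu * m.cpu_util


def rank_nodes(candidates: Dict[str, NodeMetrics]) -> List[str]:
    """Return candidate node names ordered from least to most contended."""
    return sorted(candidates, key=lambda name: interference_score(candidates[name]))


if __name__ == "__main__":
    nodes = {
        "node-a": NodeMetrics(llc_miss_rate=0.7, mem_bw_util=0.6, cpu_util=0.4),
        "node-b": NodeMetrics(llc_miss_rate=0.2, mem_bw_util=0.3, cpu_util=0.5),
    }
    print(rank_nodes(nodes))  # expected: ['node-b', 'node-a']
```

Under these assumptions, a node with low CPU utilization but heavy cache and memory-bandwidth pressure would rank worse than coarse metrics alone would suggest, which is the kind of contention the default Kubernetes scheduler does not see.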
Keywords
Resource management,Kubernetes,Interference-aware,High-Performance Computing,Cloud computing