AutoDECK: Automated Declarative Performance Evaluation and Tuning Framework on Kubernetes

2022 IEEE 15th International Conference on Cloud Computing (CLOUD)

Abstract
Containerization and application variety bring many challenges in automating evaluations for performance tuning and for comparison among infrastructure choices. Due to the tightly coupled design of benchmarks and evaluation tools, existing automated tools on Kubernetes are limited to trivial microbenchmarks and cannot be extended to complex cloud-native architectures such as microservices and serverless, which are usually managed by customized operators that set up workload dependencies. In this paper, we propose AutoDECK, a performance evaluation framework that operates in a fully declarative manner. The proposed framework automates configuring, deploying, evaluating, summarizing, and visualizing the benchmarking workload. It seamlessly integrates mature Kubernetes-native systems and adds functionalities such as tracking the image-build pipeline and auto-tuning. We present five use cases of evaluation and analysis across various kinds of benchmarks, including microbenchmarks and HPC/AI benchmarks. The evaluation results can also differentiate characteristics such as resource usage behavior and parallelism effectiveness between different clusters. Furthermore, the results demonstrate the benefit of integrating an auto-tuning feature into the proposed framework, as shown by the 10% improvement in transferred memory bytes in the Sysbench benchmark.
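To make the declarative workflow concrete, a benchmark run in such a framework would typically be described as a Kubernetes custom resource that an operator reconciles. The sketch below is purely illustrative: the `kind`, API group, and field names are assumptions for this example, not the actual AutoDECK schema.

```yaml
# Hypothetical benchmark custom resource, showing how a declarative
# evaluation framework on Kubernetes might describe a run end to end.
# Kind and field names are assumed for illustration only.
apiVersion: example.org/v1alpha1
kind: BenchmarkRun
metadata:
  name: sysbench-memory
spec:
  image: sysbench:latest      # workload image, possibly from a tracked build pipeline
  parallelism: 4              # number of concurrent worker pods
  parameters:                 # benchmark-specific arguments
    test: memory
    memory-block-size: 1K
  tuning:                     # auto-tuning over a declared search space
    objective: maximize-transferred-bytes
    searchSpace:
      threads: [1, 2, 4, 8]
  output:
    summarize: true           # aggregate per-pod results into a summary
    visualize: true           # export the summary to a dashboard
```

In this style, the user declares *what* to evaluate (workload, parameters, tuning objective), and the operator handles *how*: scheduling the pods, resolving workload dependencies, collecting results, and iterating over the search space.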
Keywords
AutoDECK, automated declarative performance evaluation, tuning framework, automating evaluations, performance tuning, trivial microbenchmarks, complex cloud-native architectures, performance evaluation framework, mature Kubernetes-native systems, auto-tuning feature, Sysbench benchmark, HPC/AI benchmarks