Trace-Driven Scaling of Microservice Applications

IEEE Access (2023)

Abstract
The containerized microservices architecture is being increasingly used to build complex applications. To minimize operating costs, service providers typically rely on an auto-scaler to "right size" their infrastructure amid fluctuating workloads. The agile nature of microservice development and deployment requires an auto-scaler that does not require significant effort to derive resource allocation decisions. In this paper, we investigate reducing auto-scaler development effort along a number of dimensions. First, we focus on a technique that does not require an expert to develop a model, e.g., a queuing model or machine learning model, of the system and tweak the model as the underlying microservice application changes. Second, we explore ways to limit the number of workload patterns that need to be considered. Third, we study techniques to reduce the number of resource allocation scenarios that one has to explore before deploying the auto-scaler. To address these goals, we first analyze the workload of 24,000 real microservice applications and find that a small number of workload patterns dominate for any given application. These results suggest that auto-scaler design can be driven by this small subset of popular workload patterns, thereby limiting effort. To limit the number of resource allocation scenarios explored, we develop a novel heuristic optimization technique called MOAT, which outperforms the Bayesian Optimization often used for such exercises. We combine insights obtained from real microservice workloads and MOAT to realize an auto-scaler called TRIM that requires no system modeling. For each popular workload pattern identified for an application, TRIM uses MOAT to pre-compute a near minimal resource allocation that satisfies end user response time targets. These resource allocations are then used at runtime when appropriate. We validate our approach using a variety of analytical, on-premise, and public cloud systems.
From our results, TRIM in concert with MOAT significantly improves the performance of the industry-standard HPA auto-scaler, achieving up to 92% fewer response time violations and up to 34% lower costs compared to using HPA in isolation.
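The pre-compute-then-lookup scheme described above can be illustrated with a minimal sketch. This is not the paper's implementation: the pattern vectors, allocations, service names, and the distance threshold below are all hypothetical, standing in for the per-pattern allocations that MOAT would pre-compute offline. At runtime, the observed workload is matched to the nearest known pattern and that pattern's allocation is applied; an unmatched workload falls back to a reactive scaler such as HPA.

```python
from math import dist

# Hypothetical offline results: per-service request-rate vectors for an
# application's popular workload patterns, each paired with the near-minimal
# container allocation that an optimizer like MOAT might pre-compute for it.
PRECOMPUTED = {
    (100.0, 40.0, 10.0): {"frontend": 2, "cart": 1, "db": 1},
    (400.0, 160.0, 40.0): {"frontend": 6, "cart": 3, "db": 2},
    (900.0, 360.0, 90.0): {"frontend": 12, "cart": 6, "db": 3},
}

def choose_allocation(observed, max_distance=200.0):
    """Return the pre-computed allocation for the known pattern nearest to the
    observed workload vector, or None if no pattern is close enough (the
    caller would then fall back to a reactive auto-scaler such as HPA)."""
    nearest = min(PRECOMPUTED, key=lambda pattern: dist(pattern, observed))
    if dist(nearest, observed) > max_distance:
        return None
    return PRECOMPUTED[nearest]

# An observed load near the (400, 160, 40) pattern reuses its allocation.
print(choose_allocation((420.0, 150.0, 45.0)))
# A workload unlike any popular pattern yields None (fallback case).
print(choose_allocation((5000.0, 5000.0, 5000.0)))
```

This captures the claimed source of the cost savings: the expensive search over allocation scenarios happens once per popular pattern offline, so the runtime decision reduces to a nearest-pattern lookup.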
Keywords
Auto-scalers, containers, microservice architecture, resource allocation, software performance