Enhancing the Configuration Tuning Pipeline of Large-Scale Distributed Applications Using Large Language Models (Idea Paper)

ICPE '23 Companion: Companion of the 2023 ACM/SPEC International Conference on Performance Engineering (2023)

Abstract
The performance of distributed applications built on a microservice architecture depends heavily on the configuration of numerous parameters, which are hard to tune because of the large configuration search space and the inter-dependence of parameters. While the information in product manuals and technical documents guides the tuning process, manually collecting meta-data for all application parameters is laborious and does not scale. Prior work has largely overlooked the automated use of product manuals, technical documents, and source code for extracting such meta-data. In this work, we propose using large language models for automated meta-data extraction to enhance the configuration tuning pipeline. We further ideate on building an in-house knowledge system that learns which parameters matter for configuration tuning from historical experimental data on parameter dependence, workload statistics, performance metrics, and resource utilization. We expect that productionizing the proposed system will reduce the total time and number of experimental iterations required for configuration tuning of new applications, saving an organization both developer time and money.
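To make the proposed meta-data extraction step concrete, the sketch below illustrates how an LLM could be prompted to turn a documentation excerpt into structured parameter meta-data (type, default, valid range, dependencies). This is a minimal illustration and not the authors' implementation: the `call_llm` function is a hypothetical stand-in for whatever LLM API is actually used (here it returns a canned response so the example runs end-to-end), and the example excerpt and extracted fields are illustrative assumptions.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call. Returns a canned JSON response
    so this sketch is runnable; replace with a real model invocation."""
    return json.dumps([{
        "name": "hystrix.threadpool.default.coreSize",
        "type": "int",
        "default": 10,
        "valid_range": ">= 1",
        "depends_on": ["maximumSize", "maxQueueSize"],
        "performance_impact": "controls command thread-pool parallelism",
    }])

EXTRACTION_PROMPT = """You are extracting configuration meta-data.
From the documentation excerpt below, return a JSON list where each entry has:
  name, type, default, valid_range, depends_on, performance_impact.

Documentation excerpt:
{doc_excerpt}
"""

def extract_parameter_metadata(doc_excerpt: str) -> list[dict]:
    """Ask the LLM to convert a manual excerpt into structured parameter meta-data."""
    response = call_llm(EXTRACTION_PROMPT.format(doc_excerpt=doc_excerpt))
    return json.loads(response)  # assumes the model returns valid JSON

if __name__ == "__main__":
    excerpt = (
        "hystrix.threadpool.default.coreSize (int, default 10): number of threads "
        "in the command thread pool; should be sized together with maximumSize "
        "and the queue settings."
    )
    for param in extract_parameter_metadata(excerpt):
        print(param["name"], "depends on", param.get("depends_on"))
```

The extracted records could then be stored in the in-house knowledge system described in the abstract and used to prune or prioritize the configuration search space during tuning.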