IRIS: Interference and Resource Aware Predictive Orchestration for ML Inference Serving

2023 IEEE 16th International Conference on Cloud Computing (CLOUD), 2023

Abstract
Over the past few years, the ever-growing number of Machine Learning (ML) and Artificial Intelligence (AI) applications deployed in the Cloud has placed high demands on the computing resources required for efficient processing. Multiple users deploy multiple applications on the same server node to maximize Quality of Service (QoS); however, this leads to increased interference. In addition, Cloud providers aim to minimize their operating costs by efficiently utilizing the available resources. These conflicting optimization goals form a complex paradigm in which efficient scheduling is required. In this work, we present IRIS, an interference- and resource-aware predictive scheduling framework for ML inference serving in the cloud. We target the multi-objective problem of maximizing QoS while using CPU resources effectively, based on Queries per Second (QPS) predictions, by proposing a model-less ML-based solution and integrating it into the Kubernetes platform. Our approach is evaluated on real hardware infrastructure with a set of ML applications. Our experimental analysis shows that, under various QoS constraints, the model-specific interference-aware scheduler violates QoS constraints less frequently, achieving 1.8x fewer violations on average compared to over-provisioning and 3.1x fewer violations compared to under-provisioning, through efficient exploitation of available CPU resources. The model-less variant achieves, on average, 1.5x fewer violations than the model-specific scheduler, while further reducing average CPU utilization by approximately 30%.