Progress-based Container Scheduling for Short-lived Applications in a Kubernetes Cluster

Yuqi Fu, Shaolun Zhang, Jose Terrero, Ying Mao, Guangya Liu, Sheng Li, Dingwen Tao

2019 IEEE International Conference on Big Data (Big Data)

Citations 44 | Views 86
Abstract
In the past decade, we have witnessed enormous growth in the data generated by different sources, ranging from weather sensors and customer purchasing records to Internet of Things devices. Emerging data-driven technologies, such as Amazon Personalize [1], which creates real-time individualized recommendations for customers based on multidimensional data analytics, have been reshaping our daily lives for years. It is, however, challenging to fully utilize and harness the potential of data, especially big data, due to its Volume, Velocity, Variety, Variability, and Value (the 5Vs) [2]. Most businesses therefore choose to migrate their hardware demands to cloud providers, such as Amazon Web Services [3], which is powered by hundreds of thousands of servers. A cluster built from a number of cloud servers is the basic management unit for providing shared computing resources. The typical structure of a cluster consists of managers and workers. When a job arrives at the cluster, the managers must first select a worker to host it. Traditionally, this selection is based on the state of the workers (e.g., resource availability) and the specifications of jobs (e.g., labels, zones, and regions). Taking currently running jobs into account, we propose a progress-based container placement scheme named ProCon. When scheduling incoming containers, ProCon considers not only the instantaneous resource utilization on the workers but also an estimate of their future resource usage. By monitoring the progress of running jobs, ProCon balances resource contention across the cluster and reduces both completion time and makespan. Extensive experiments show that ProCon reduces completion time by up to 53.3% for a particular job and improves overall performance by 23.0%. Additionally, ProCon improves makespan by up to 37.4% compared to the default scheduler available in Kubernetes.
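The abstract does not specify ProCon's scoring function, so the following Go sketch only illustrates the general idea of progress-aware placement: a worker is ranked by its current free capacity plus the capacity its running jobs are expected to release soon, inferred from their progress. All names (Worker, ExpectedRelease) and the 0.5 weight are hypothetical and not taken from the paper.

```go
package main

import "fmt"

// Worker captures a node's instantaneous utilization together with a forecast
// of how much capacity its running jobs will free up soon, based on their
// reported progress. Field names are illustrative, not from the paper.
type Worker struct {
	Name        string
	CPUUsed     float64 // fraction of CPU currently in use (0..1)
	CPUCapacity float64 // normalized capacity (1.0 = full node)
	// ExpectedRelease estimates the CPU fraction that running containers will
	// release within a short horizon, e.g., derived from a training job's
	// completed epochs divided by its total epochs.
	ExpectedRelease float64
}

// score ranks a worker: currently free capacity counts fully, and capacity
// that is about to be released counts partially. The 0.5 weight is an
// assumed tuning parameter.
func score(w Worker) float64 {
	free := w.CPUCapacity - w.CPUUsed
	return free + 0.5*w.ExpectedRelease
}

// pickWorker selects the worker with the highest combined score.
func pickWorker(workers []Worker) Worker {
	best := workers[0]
	for _, w := range workers[1:] {
		if score(w) > score(best) {
			best = w
		}
	}
	return best
}

func main() {
	workers := []Worker{
		{Name: "node-a", CPUUsed: 0.70, CPUCapacity: 1.0, ExpectedRelease: 0.50},
		{Name: "node-b", CPUUsed: 0.60, CPUCapacity: 1.0, ExpectedRelease: 0.05},
	}
	// node-a is busier right now, but its jobs are close to finishing, so the
	// progress-aware score prefers it (0.55) over node-b (0.425).
	fmt.Println("selected:", pickWorker(workers).Name)
}
```

A utilization-only scheduler would pick node-b in this example; weighing in job progress shifts the decision toward the node whose contention is about to disappear, which is the intuition behind reducing completion time and makespan.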
Keywords
Big Data,Deep Learning,Container,Docker,Kubernetes