Towards QoS-Aware and Resource-Efficient GPU Microservices Based on Spatial Multitasking GPUs In Datacenters

arXiv (2020)

Abstract
While prior research focuses on CPU-based microservices, it is not applicable to GPU-based microservices due to their different contention patterns. It is challenging to optimize resource utilization while guaranteeing QoS for GPU microservices. We find that the overhead is caused by inter-microservice communication, GPU resource contention, and imbalanced throughput within the microservice pipeline. We propose Camelot, a runtime system that manages GPU microservices taking the above factors into account. In Camelot, a global memory-based communication mechanism enables onsite data sharing that significantly reduces the end-to-end latencies of user queries. We also propose two contention-aware resource allocation policies that either maximize the peak supported service load or minimize the resource usage at low load while ensuring the required QoS. The two policies consider the microservice pipeline effect and the runtime GPU resource contention when allocating resources for the microservices. Compared with state-of-the-art work, Camelot increases the supported peak load by up to 64.5% with limited GPUs, and reduces resource usage at low load by 35% while achieving the desired 99%-ile latency target.
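As a rough illustration of the onsite-data-sharing idea described in the abstract, the sketch below chains two pipeline stages through a buffer kept in GPU global memory instead of copying the intermediate result back to the host between stages. This is a minimal single-process CUDA example, not Camelot's implementation; the kernel and buffer names (producer_stage, consumer_stage, shared_buf) are illustrative assumptions.

```cuda
// Minimal sketch (assumed names, not Camelot's API): the downstream stage
// consumes the upstream stage's output directly from GPU global memory,
// avoiding the device-to-host-to-device round trip that per-microservice
// RPC-style communication would incur.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void producer_stage(float *shared_buf, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) shared_buf[i] = 2.0f * i;        // stand-in for an upstream microservice
}

__global__ void consumer_stage(const float *shared_buf, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = shared_buf[i] + 1.0f;   // stand-in for a downstream microservice
}

int main() {
    const int n = 1 << 20;
    float *shared_buf, *out;
    cudaMalloc(&shared_buf, n * sizeof(float)); // intermediate data stays on the GPU
    cudaMalloc(&out, n * sizeof(float));

    dim3 block(256), grid((n + block.x - 1) / block.x);
    producer_stage<<<grid, block>>>(shared_buf, n);
    // No cudaMemcpy back to the host here: the next stage reads the same buffer.
    consumer_stage<<<grid, block>>>(shared_buf, out, n);
    cudaDeviceSynchronize();

    float sample;
    cudaMemcpy(&sample, out, sizeof(float), cudaMemcpyDeviceToHost);
    printf("out[0] = %f\n", sample);

    cudaFree(shared_buf);
    cudaFree(out);
    return 0;
}
```

In an actual microservice deployment the two stages would run as separate processes, so sharing a device buffer would require a cross-process mechanism (for example, CUDA IPC memory handles); the benefit illustrated here is the same either way: the intermediate data never leaves GPU global memory.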
Keywords
spatial multitasking GPUs, QoS-aware, resource-efficient