Grus: Enabling Latency SLOs for GPU-Accelerated NFV Systems

2018 IEEE 26th International Conference on Network Protocols (ICNP), 2018

Abstract
The Graphics Processing Unit (GPU) has recently been exploited as a hardware accelerator to improve the performance of Network Function Virtualization (NFV). However, GPU-accelerated NFV systems suffer from significant latency variation when multiple network functions (NFs) are co-located on the same machine, which prevents operators from supporting latency Service Level Objectives (SLOs). Existing research efforts to address this problem can guarantee only a limited number of SLOs, with very low resource utilization efficiency. In this paper, we present the Grus framework to support latency SLOs in GPU-accelerated NFV systems. Grus thoroughly analyzes the sources of latency variation and proposes three design principles: (1) dynamic batch size setting is needed to bound packet batching latency in the CPU; (2) a reordering mechanism for data transfers over PCIe is required to bound stalling time; and (3) concurrency in the GPU must be maximized to avoid NF execution waiting time. Guided by these principles, Grus consists of two logical layers: an infrastructure layer and a scheduling layer. The infrastructure layer is equipped with an in-CPU Reorder-able Worker Pool that can adjust batch size and packet transfer order, and in-GPU Controllable Concurrent Executors that provide maximized concurrency. The scheduling layer runs a heuristic algorithm to perform accurate and fast scheduling that guarantees SLOs based on our prediction models. We have implemented a prototype of Grus. Extensive evaluations demonstrate that Grus significantly reduces latency variation and satisfies 4.5× more SLO terms than state-of-the-art solutions.
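Design principle (1) above caps packet-batching latency by refusing to wait indefinitely for a full batch. The following is a minimal, hypothetical sketch of that idea (not the authors' implementation): the collector returns either when the batch is full or when a per-batch latency budget expires, whichever comes first, so batching delay is bounded by the budget.

```python
import time
from collections import deque

def batch_packets(queue, max_batch, latency_budget_s):
    """Collect up to max_batch packets from queue, but never wait past
    latency_budget_s. Illustrative sketch of bounded-latency batching;
    the function name and parameters are assumptions, not from Grus.
    """
    batch = []
    deadline = time.monotonic() + latency_budget_s
    while len(batch) < max_batch and time.monotonic() < deadline:
        if queue:
            batch.append(queue.popleft())
        # In a real NIC poll loop, an empty queue would be re-polled here;
        # the deadline check above bounds how long that polling can last.
    return batch
```

Under this scheme the worst-case batching delay is `latency_budget_s` regardless of traffic rate, at the cost of occasionally dispatching under-full batches to the GPU.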
Keywords
NFV, GPU-based acceleration, Service Level Objectives