Inter-Server RSS: Extending Receive Side Scaling for Inter-Server Workload Distribution

2020 28th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP), 2020

Abstract
Network Function Virtualization enables operators to schedule diverse network processing workloads on a general-purpose hardware infrastructure. However, short-lived processing peaks make efficient dimensioning of processing resources under stringent tail latency constraints challenging. To reduce dimensioning overheads, several load balancing approaches, which either adaptively steer network traffic to a group of servers or to their internal CPU cores, have been investigated separately. In this paper, we present Inter-Server RSS (isRSS), a hardware mechanism built on top of Receive Side Scaling in the network interface card, which combines intra- and inter-server load balancing. As a first step, isRSS targets a balanced utilization of processing resources by steering packet bursts to CPU cores based on per-core load feedback. If all local CPU cores are highly loaded, isRSS avoids high queueing delays by redirecting newly arriving packet bursts to other servers that execute the same network functions, exploiting the fact that processing peaks are unlikely to occur at all servers at the same time. Our evaluation based on real-world network traces shows that, compared to Receive Side Scaling, the joint intra- and inter-server load balancing approach is able to reduce the processing capacity dimensioned for network function execution by up to 38.95% and limit packet reordering to 0.0589% while maintaining tail latencies.
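The two-level steering decision summarized in the abstract can be illustrated with a minimal sketch: prefer the least-loaded local core, and redirect a whole burst to a peer server only when every local core is highly loaded. This is a hypothetical simplification in C, not the paper's NIC hardware logic; all names (core_load, HIGH_LOAD_THRESHOLD, pick_remote_server, the number of cores and peer servers) are assumptions made for illustration.

```c
#include <stdint.h>

#define NUM_CORES            8
#define NUM_PEER_SERVERS     4
#define HIGH_LOAD_THRESHOLD 90   /* per-core load (%) above which a core counts as highly loaded */

/* Per-core load feedback, assumed to be updated periodically by the CPU cores. */
static uint8_t core_load[NUM_CORES];

/* Pick a peer server running the same network functions.
 * The selection policy is left open in the abstract; round-robin is assumed here. */
static int pick_remote_server(void)
{
    static int next = 0;
    int server = next;
    next = (next + 1) % NUM_PEER_SERVERS;
    return server;
}

/* Steering decision for one newly arriving packet burst:
 *  - intra-server balancing: choose the least-loaded local core;
 *  - inter-server balancing: if all local cores exceed the threshold,
 *    redirect the entire burst to another server to avoid queueing delay.
 * Returns the local core index, or -1 if the burst is redirected
 * (in which case *redirect_to_server names the peer server). */
int steer_burst(int *redirect_to_server)
{
    int best_core = 0;
    for (int c = 1; c < NUM_CORES; c++)
        if (core_load[c] < core_load[best_core])
            best_core = c;

    if (core_load[best_core] <= HIGH_LOAD_THRESHOLD) {
        *redirect_to_server = -1;     /* keep the burst on this server */
        return best_core;
    }

    *redirect_to_server = pick_remote_server();
    return -1;                        /* burst leaves this server */
}
```

Note that the decision is made per burst rather than per packet; keeping bursts intact is consistent with the low packet reordering (0.0589%) reported in the abstract.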
Keywords
network function virtualization,load balancing,receive side scaling