Making Sense of Using a SmartNIC to Reduce Datacenter Tax from SLO and TCO Perspectives

2023 IEEE International Symposium on Workload Characterization (IISWC 2023)

Abstract
The speed of network interfaces has rapidly increased, while the performance and energy efficiency of CPUs have not, due to the demise of Dennard scaling. As a result, functions that process network packets have become responsible for a rapidly growing portion of the datacenter tax. To tackle this problem, the industry has developed SmartNICs (SNICs), which integrate conventional NICs with inexpensive, energy-efficient processors that can execute functions widely used by network-intensive datacenter applications. With such processors, SNICs promise to reduce the total cost of ownership (TCO) of datacenters by increasing the energy efficiency of servers and/or decreasing the number of expensive server CPU cores. In this paper, to make sense of using SNICs, we focus on analyzing the energy efficiency of a server equipped with an SNIC, especially under service level objective (SLO) constraints, which matter for many datacenter applications. To this end, we first measure not only the system-wide power consumption of a server but also devise a custom hardware setup that isolates the power consumption of the SNIC from that of the server; this helps us better understand the impact of SNICs on server energy efficiency. Second, we take popular TCP/UDP-, DPDK-, and RDMA-based functions and prepare them to run on an SNIC processor and a server CPU. We then measure the maximum throughput, tail latency, and system-wide energy efficiency of executing the functions on the SNIC processor and the server CPU, respectively. Lastly, based on analyses of the measurements, we make five key observations and propose three strategies for better using and designing SmartNICs in the future.
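The SLO-constrained comparison described above hinges on measuring tail latency rather than average latency. As a rough illustrative sketch only (not the paper's actual measurement harness), the snippet below estimates a p99 tail latency from sampled request latencies and checks it against an SLO budget; the function names and the 500-microsecond SLO value are assumptions chosen for illustration.

```python
# Illustrative sketch: estimate p99 tail latency from sampled request
# latencies and check it against an SLO budget. The 500-microsecond SLO
# and all names here are hypothetical, not taken from the paper.
import random


def p99_latency_us(latencies_us):
    """Return the 99th-percentile latency (microseconds) of the samples."""
    ordered = sorted(latencies_us)
    # Index of the sample at or above which 99% of requests complete.
    idx = min(len(ordered) - 1, int(0.99 * len(ordered)))
    return ordered[idx]


def meets_slo(latencies_us, slo_us=500.0):
    """True if the p99 tail latency stays within the SLO budget."""
    return p99_latency_us(latencies_us) <= slo_us


if __name__ == "__main__":
    # Simulated latency samples standing in for measured request latencies.
    samples = [random.gauss(mu=120.0, sigma=30.0) for _ in range(10_000)]
    print(f"p99 = {p99_latency_us(samples):.1f} us, "
          f"meets SLO: {meets_slo(samples)}")
```

Under such a check, a configuration (server CPU vs. SNIC processor) would only be compared on throughput and energy efficiency at load points where the SLO is still met.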