Accelerating Data Serialization/Deserialization Protocols with In-Network Compute

2022 IEEE/ACM International Workshop on Exascale MPI (ExaMPI)

Abstract
Efficient data communication is a major goal for scalable and cost-effective use of datacenter and HPC system resources. To let applications communicate efficiently, exchanged data must be serialized at the source and deserialized at the destination. The serialization/deserialization process enables exchanging data in a language- and machine-independent format. However, serialization/deserialization overheads can negatively impact application performance. For example, a server within a microservice framework must deserialize all incoming requests before invoking the respective microservices. We show how data deserialization can be offloaded to fully programmable SmartNICs and performed on the data path, on a per-packet basis. This solution avoids intermediate memory copies, enabling on-the-fly deserialization. We showcase our approach by offloading Google Protocol Buffers, a widely used framework for serializing/deserializing data. Our evaluation demonstrates that, by offloading data deserialization to the NIC, we can achieve up to $4.8\times$ higher throughput than a single AMD Ryzen 7 CPU. We then show, through microservice throughput modeling, how the overall throughput can be improved by pipelining deserialization and the actual application activities with PsPIN.
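For context, the host-side step that the paper proposes to offload looks roughly like the following sketch. It uses the standard Protocol Buffers C++ API with a hypothetical `Request` message (the schema, field names, and generated header `request.pb.h` are illustrative assumptions, not from the paper); the point is that the server must parse every incoming buffer before it can dispatch the call to a microservice.

```cpp
// Hypothetical request.proto compiled with protoc:
//   syntax = "proto3";
//   message Request {
//     uint64 id      = 1;
//     string method  = 2;
//     bytes  payload = 3;
//   }

#include <iostream>
#include <string>
#include "request.pb.h"  // header generated by protoc from the schema above (assumed)

int main() {
    // Client side: serialize the request into a language- and machine-independent wire format.
    Request req;
    req.set_id(42);
    req.set_method("getUser");
    req.set_payload("user=1001");

    std::string wire;
    req.SerializeToString(&wire);  // serialization cost paid at the source

    // Server side: every incoming buffer must be deserialized before the target
    // microservice can be invoked -- this is the per-request CPU work the paper
    // moves onto the SmartNIC, where it can run per packet on the data path.
    Request parsed;
    if (!parsed.ParseFromString(wire)) {
        std::cerr << "malformed request\n";
        return 1;
    }
    std::cout << "dispatching " << parsed.method()
              << " (id " << parsed.id() << ")\n";
    return 0;
}
```

On the host this parse runs only after the NIC has written the full message into memory; performing it on the NIC, packet by packet, is what removes the intermediate copies and allows deserialization to overlap with application work.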
Keywords
SmartNICs, sPIN, deserialization, offload, RPC, microservices