AStore: Uniformed Adaptive Learned Index and Cache for RDMA-enabled Key-Value Store

IEEE Transactions on Knowledge and Data Engineering (2024)

Abstract
Distributed key-value storage and computation are essential components of cloud services. As the demand for high-performance systems has grown, a new architecture has emerged that separates computing and storage nodes and connects them over RDMA-enabled networks. Existing RDMA-enabled systems use client-side cached indexes to reduce communication overhead and improve performance. However, such approaches can cause high server-side CPU contention under heavy dynamic workloads (i.e., inserts) and introduce a large accuracy gap because the client-side and server-side indexes differ. These drawbacks limit the performance of RDMA-enabled systems. In this paper, we introduce AStore to address these issues and achieve high performance with a low memory footprint. AStore employs a new uniformed architecture that uses an adaptive learned index as both the server-side learned index and the client-side cached index to handle dynamic and static workloads. We propose several optimization techniques for the dynamic and static workload procedures and design a leaf-node lock mechanism to support highly concurrent access. Extensive evaluations on the YCSB, LGN, and OSM datasets demonstrate that AStore improves performance over XStore by up to 75.2%, 107.3%, and 57.7% on read-only workloads, and by up to 65.7%, 108.7%, and 74.3% on write-read workloads.
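
To make the abstract's key ideas concrete, the sketch below illustrates what a learned leaf node with an error-bounded linear model and a per-leaf lock might look like. This is a minimal, hypothetical C++ example based only on the abstract's description; the names (LearnedLeaf, lookup, insert_locked), the spinlock choice, and the lock-free read path are illustrative assumptions, not AStore's actual implementation.

// Hypothetical sketch: a learned leaf node whose linear model predicts a key's
// slot, so lookups only search an error-bounded range, and whose per-leaf lock
// serializes concurrent writers. Illustrative only, not AStore's real API.
#include <algorithm>
#include <atomic>
#include <cstdint>
#include <optional>
#include <vector>

struct LearnedLeaf {
    std::vector<uint64_t> keys;           // sorted keys stored in this leaf
    std::vector<uint64_t> values;         // values aligned with keys
    double slope = 0.0, intercept = 0.0;  // linear model: pos ~ slope * key + intercept
    long error = 0;                       // maximum model prediction error, in slots
    std::atomic_flag lock = ATOMIC_FLAG_INIT;  // per-leaf lock for writers

    // Predict a slot with the model, then binary-search only [pred - error, pred + error].
    // Reads are shown lock-free for simplicity; a real design would validate versions.
    std::optional<uint64_t> lookup(uint64_t key) const {
        if (keys.empty()) return std::nullopt;
        const long last_idx = static_cast<long>(keys.size()) - 1;
        const long pred = static_cast<long>(slope * static_cast<double>(key) + intercept);
        const long lo = std::clamp(pred - error, 0L, last_idx);
        const long hi = std::clamp(pred + error, 0L, last_idx);
        auto first = keys.begin() + lo, last = keys.begin() + hi + 1;
        auto it = std::lower_bound(first, last, key);
        if (it != last && *it == key)
            return values[static_cast<size_t>(it - keys.begin())];
        return std::nullopt;
    }

    // Writers take the leaf lock so concurrent inserts do not race; a real system
    // would also adjust or retrain the model when inserts grow the error bound.
    void insert_locked(uint64_t key, uint64_t value) {
        while (lock.test_and_set(std::memory_order_acquire)) { /* spin */ }
        auto it = std::lower_bound(keys.begin(), keys.end(), key);
        const auto idx = static_cast<size_t>(it - keys.begin());
        values.insert(values.begin() + static_cast<long>(idx), value);
        keys.insert(it, key);
        lock.clear(std::memory_order_release);
    }
};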
Keywords
Learned index, Distributed system, Key-Value store, RDMA