DiffLex: A High-Performance, Memory-Efficient and NUMA-Aware Learned Index using Differentiated Management

Proceedings of the 52nd International Conference on Parallel Processing (ICPP 2023)

Abstract
Learned indexes that utilize machine learning models can offer significant performance advantages over traditional indexes. However, existing learned indexes suffer from space-performance tradeoffs, and they do not scale well on machines with multiple NUMA nodes. These issues limit the deployment of learned indexes in production environments. In this paper, we propose DiffLex, a high-performance, memory-efficient, and NUMA-aware learned index. The core idea of DiffLex is to differentiate key management based on hotness. To achieve high performance, DiffLex stores newly inserted keys in sparse deltas and frequently accessed keys in a sparse hot cache. Cold keys, which occupy most of the storage space, are instead stored in dense arrays to reduce memory cost. DiffLex also makes the sparse deltas and the hot cache NUMA-aware by partitioning the sparse deltas and replicating the hot cache across NUMA nodes. Our evaluation shows that DiffLex outperforms the state-of-the-art ALEX by 3.88x for insert operations and 1.82x for search operations, while maintaining a small index size.
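The hotness-differentiated layout described in the abstract can be illustrated with a minimal C++ sketch. Everything below is an assumption-based illustration, not the paper's implementation: the class and member names (DiffLexSketch, deltas_, hot_cache_, kNumNodes) are hypothetical, and a binary search over the dense cold array stands in for the learned model's position prediction. The sketch only shows the routing idea: inserts go to a NUMA-partitioned sparse delta, lookups consult a per-node replicated hot cache first, then the deltas, and finally the dense cold store.

```cpp
// Hypothetical sketch of hotness-differentiated key management.
// Names and structure are illustrative assumptions, not the authors' code.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <map>
#include <optional>
#include <unordered_map>
#include <utility>
#include <vector>

constexpr int kNumNodes = 2;  // assumed number of NUMA nodes

class DiffLexSketch {
public:
    // Cold keys live in one dense, sorted array (the memory-efficient base data).
    explicit DiffLexSketch(std::vector<std::pair<uint64_t, uint64_t>> cold)
        : cold_(std::move(cold)) {
        std::sort(cold_.begin(), cold_.end());
    }

    // Inserts go to the sparse delta of the caller's NUMA node (partitioned).
    void insert(int numa_node, uint64_t key, uint64_t value) {
        deltas_[numa_node % kNumNodes][key] = value;
    }

    // Lookup order: node-local replicated hot cache -> partitioned sparse
    // deltas -> dense cold array (binary search stands in for the learned
    // model's position prediction).
    std::optional<uint64_t> find(int numa_node, uint64_t key) {
        auto& cache = hot_cache_[numa_node % kNumNodes];
        if (auto it = cache.find(key); it != cache.end()) return it->second;
        for (auto& delta : deltas_) {
            if (auto it = delta.find(key); it != delta.end()) return it->second;
        }
        auto it = std::lower_bound(cold_.begin(), cold_.end(),
                                   std::make_pair(key, uint64_t{0}));
        if (it != cold_.end() && it->first == key) {
            // Simplified promotion: replicate the hit into every node's hot
            // cache (a real design would track access frequency).
            for (auto& c : hot_cache_) c[key] = it->second;
            return it->second;
        }
        return std::nullopt;
    }

private:
    std::vector<std::pair<uint64_t, uint64_t>> cold_;              // dense cold store
    std::map<uint64_t, uint64_t> deltas_[kNumNodes];               // sparse per-node deltas
    std::unordered_map<uint64_t, uint64_t> hot_cache_[kNumNodes];  // replicated hot cache
};

int main() {
    DiffLexSketch idx({{10, 100}, {20, 200}, {30, 300}});
    idx.insert(/*numa_node=*/0, 25, 250);   // lands in node 0's sparse delta
    std::cout << *idx.find(1, 20) << "\n";  // cold hit, promoted to the hot caches
    std::cout << *idx.find(0, 25) << "\n";  // served from the sparse delta
}
```

The split mirrors the abstract's tradeoff: the dense array keeps the bulk of (cold) keys compact, while the sparse, per-node structures absorb writes and hot reads without cross-node traffic.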
Keywords
Index structure, learned index, NUMA-aware index