MG-Buffer: A Read/Write-Optimized Multi-Grained Buffer Management Scheme for Database Systems

HPCC/SmartCity/DSS (2019)

Abstract
It is a common design in traditional DBMSs to use a page buffer consisting of fixed-size pages to cache hot data. However, a DBMS has to read and cache at least one whole page even if only a single tuple is needed. As a result, many unneeded tuples reside in the buffer, lowering its efficiency; the problem becomes worse as the page size increases. To address this, this work proposes a multi-grained buffer composed of a page buffer and a tuple buffer to improve buffer efficiency. The page buffer works as usual, while the tuple buffer caches exactly the hot or dirty tuples requested by users. Following this idea, we present a new buffering scheme called MG-Buffer, which consists of an F-Buffer for pages and an S-Buffer for tuples migrated from the F-Buffer. After introducing the architecture of MG-Buffer, we detail its operations, including migration and merging, reads and writes, and replacement. MG-Buffer is read-optimized because it increases the hit ratio of the entire buffer. It is also write-optimized, because it groups the dirty tuples cached in the S-Buffer before writing them to disk. We conduct trace-driven experiments on both magnetic disks and SSDs to evaluate MG-Buffer. The results demonstrate the efficiency of our proposal. In particular, MG-Buffer achieves a 20% higher hit ratio on average and reduces I/Os by 20% compared with traditional disk-based buffering schemes including LRU, 2Q, and LIRS, and it reduces writes to SSD by 30% on average compared with existing SSD-aware buffering schemes such as CFLRU, CFDC, and FD-Buffer.
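As a rough illustration of the two-level design described above, the following Python sketch shows one way a tuple-granularity S-Buffer can sit beside a page-granularity F-Buffer: reads are served from the S-Buffer first, then the F-Buffer, and dirty tuples are grouped by their home page before being written back. The class and method names, the LRU policy, the capacities, and the storage interface are assumptions made for illustration only; they are not taken from the paper's implementation.

```python
from collections import OrderedDict

class MGBuffer:
    """Minimal sketch: an F-Buffer of whole pages plus an S-Buffer of individual tuples."""

    def __init__(self, f_capacity, s_capacity, storage):
        self.f_buffer = OrderedDict()   # F-Buffer: page_id -> list of tuples, in LRU order
        self.s_buffer = OrderedDict()   # S-Buffer: (page_id, slot) -> (tuple value, dirty flag)
        self.f_capacity = f_capacity
        self.s_capacity = s_capacity
        self.storage = storage          # assumed backend exposing read_page() / write_page()

    def read_tuple(self, page_id, slot):
        key = (page_id, slot)
        if key in self.s_buffer:                 # tuple-level hit in the S-Buffer
            self.s_buffer.move_to_end(key)
            return self.s_buffer[key][0]
        if page_id in self.f_buffer:             # page-level hit in the F-Buffer
            self.f_buffer.move_to_end(page_id)
            return self.f_buffer[page_id][slot]
        page = self.storage.read_page(page_id)   # miss: read the whole page from disk
        if len(self.f_buffer) >= self.f_capacity:
            self.f_buffer.popitem(last=False)    # evict the LRU page (migration policy omitted)
        self.f_buffer[page_id] = page
        return page[slot]

    def write_tuple(self, page_id, slot, value):
        # Dirty tuples are kept at tuple granularity in the S-Buffer so they can be
        # grouped by page and flushed together (the write-optimized part of the scheme).
        self.s_buffer[(page_id, slot)] = (value, True)
        self.s_buffer.move_to_end((page_id, slot))
        if len(self.s_buffer) > self.s_capacity:
            self.flush_dirty_grouped()

    def flush_dirty_grouped(self):
        by_page = {}                             # group dirty tuples by their home page
        for (page_id, slot), (value, dirty) in self.s_buffer.items():
            if dirty:
                by_page.setdefault(page_id, {})[slot] = value
        for page_id, slots in by_page.items():
            page = self.f_buffer[page_id] if page_id in self.f_buffer \
                else self.storage.read_page(page_id)
            for slot, value in slots.items():
                page[slot] = value
            self.storage.write_page(page_id, page)   # one write per dirty page
        self.s_buffer.clear()
```

Under these assumptions, the grouping step is what saves device writes: many tuple-level updates to the same page collapse into a single page write, which matters especially on SSDs.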
Keywords
Buffer management, Multi-grained buffer, Migration, Merge, LRU