Improving Restore Performance of Packed Datasets in Deduplication Systems via Reducing Persistent Fragmented Chunks

IEEE Transactions on Parallel and Distributed Systems (2020)

Abstract
Data deduplication, while effective at eliminating redundancy in storage systems, introduces chunk fragmentation that severely degrades restore performance. Rewriting algorithms have been proposed to reduce chunk fragmentation. Backup software typically aggregates files into larger "tar"-type files for storage. We observe that, in tar-type datasets, a large number of Persistent Fragmented Chunks (PFCs) are repeatedly rewritten by state-of-the-art rewriting algorithms in every backup, which severely impacts restore performance. PFCs arise from the traditional strategy of storing them alongside other chunks in containers to preserve stream locality, which leaves them perpetually stored in containers with low utilization. We propose DePFC to reduce PFCs. DePFC identifies PFCs, removes them from the locality-preserving containers, and groups them together, so that the containers holding them are highly utilized by the subsequent backup and the chunks are not rewritten again. We further propose an FC Buffer that avoids mistakenly rewriting PFCs and avoids grouping together PFCs that would cause restore-cache thrashing. Experimental results demonstrate that DePFC improves the restore performance of state-of-the-art rewriting algorithms by 44.24-89.42 percent while attaining comparable deduplication efficiency, and that the FC Buffer further improves restore performance.
Keywords
Data deduplication, storage system, chunk fragmentation, restore performance