D3N: A multi-layer cache for the rest of us

2019 IEEE International Conference on Big Data (Big Data)(2019)

Abstract
Current caching methods for improving the performance of big-data jobs assume high (e.g., full bisection) bandwidth; however, many enterprise data centers and co-location facilities have large network imbalances due to over-subscription and incremental networking upgrades. We describe D3N, a multi-layer cooperative caching architecture that mitigates network imbalances by caching data on the access side of each layer of a hierarchical network topology, adaptively adjusting the cache size of each layer based on observed workload patterns and network congestion. We have added (and submitted upstream) a 2-layer D3N cache to the Ceph RADOS Gateway; read bandwidth achieves the 5 GB/s speed of our SSDs, and we show that it substantially improves big-data job performance while reducing network traffic.
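The read path the abstract describes can be illustrated with a minimal sketch: a request first checks the access-side (rack-level) cache, then the next layer up, and only on a miss at both layers goes to the backing object store, populating the caches on the way back. The class names (`LRUCache`, `TwoLayerCache`), the LRU eviction policy, and the dictionary-backed store are illustrative assumptions, not the actual D3N/Ceph implementation; the paper's adaptive per-layer cache resizing is also omitted here for brevity.

```python
from collections import OrderedDict


class LRUCache:
    """One cache layer (stand-in for an SSD cache at one level of the
    network hierarchy). Assumption: simple LRU eviction."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)  # mark as most recently used
            return self.store[key]
        return None

    def put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used


class TwoLayerCache:
    """Hypothetical sketch of a 2-layer D3N-style read path: check the
    access-side (rack) layer, then the aggregation layer, then fall
    through to the backing object store."""

    def __init__(self, l1_capacity, l2_capacity, backing_store):
        self.l1 = LRUCache(l1_capacity)  # access-side (rack-level) layer
        self.l2 = LRUCache(l2_capacity)  # aggregation-layer cache
        self.backing = backing_store     # e.g., a dict standing in for RADOS

    def read(self, key):
        value = self.l1.get(key)
        if value is not None:
            return value                 # rack-local hit: no cross-rack traffic
        value = self.l2.get(key)
        if value is None:
            value = self.backing[key]    # miss at both layers: fetch from store
            self.l2.put(key, value)      # populate aggregation layer
        self.l1.put(key, value)          # populate rack-local layer
        return value
```

A repeated read of the same object is then served from the rack-local layer, which is how caching on the access side of each layer keeps traffic off the over-subscribed core links:

```python
store = {"obj1": b"data1"}
cache = TwoLayerCache(l1_capacity=2, l2_capacity=4, backing_store=store)
cache.read("obj1")  # miss: fetched from store, cached in both layers
cache.read("obj1")  # hit in the rack-local (L1) layer
```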
Keywords
big-data job performance,network traffic,multilayer cache,current caching methods,big-data jobs,enterprise data centers,co-location facilities,incremental networking upgrades,caching architecture,mitigates network imbalances,caching data,hierarchical network topology,cache sizes,observed workload patterns,network congestion,2-layer,D3N cache,Ceph RADOS gateway,SSD