SupMR: Circumventing Disk and Memory Bandwidth Bottlenecks for Scale-up MapReduce

Parallel & Distributed Processing Symposium Workshops (2014)

Abstract
Reading input from primary storage (i.e., the ingest phase) and aggregating results (i.e., the merge phase) are important pre- and post-processing steps in large batch computations. Unfortunately, today's data sets are so large that the ingest and merge job phases are now performance bottlenecks. In this paper, we mitigate the ingest and merge bottlenecks by leveraging the scale-up MapReduce model. We introduce an ingest chunk pipeline and a merge optimization that increase CPU utilization (50-100%) and deliver job-phase speedups (1.16×-3.13×) for the ingest and merge phases. Our techniques are based on well-known algorithms and scale-out MapReduce optimizations, but applying them to a scale-up computation framework to mitigate the ingest and merge bottlenecks is novel.
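The abstract's ingest chunk pipeline can be illustrated with a minimal sketch (an assumption based on the abstract's description, not the paper's implementation): a reader thread streams fixed-size chunks from storage while a mapper thread processes chunks already read, overlapping disk I/O with computation instead of completing the entire ingest before any map work begins. The chunk size and queue depth below are illustrative choices.

```python
import queue
import threading

# Illustrative sketch of an ingest chunk pipeline: overlap reading input
# chunks with map-side computation. Names and parameters here are
# hypothetical; the paper's actual implementation may differ.

CHUNK_SIZE = 4  # records per chunk; small value chosen for illustration


def read_chunks(records, out_q):
    """Producer: split the input into chunks and hand them to the mapper."""
    for i in range(0, len(records), CHUNK_SIZE):
        out_q.put(records[i:i + CHUNK_SIZE])
    out_q.put(None)  # sentinel: no more chunks


def map_chunks(in_q, results, map_fn):
    """Consumer: apply the map function to each chunk as it arrives."""
    while True:
        chunk = in_q.get()
        if chunk is None:
            break
        results.extend(map_fn(rec) for rec in chunk)


def pipelined_ingest(records, map_fn):
    """Run ingest and map concurrently; a bounded queue caps memory use."""
    q = queue.Queue(maxsize=2)
    results = []
    reader = threading.Thread(target=read_chunks, args=(records, q))
    mapper = threading.Thread(target=map_chunks, args=(q, results, map_fn))
    reader.start()
    mapper.start()
    reader.join()
    mapper.join()
    return results


if __name__ == "__main__":
    data = list(range(10))
    print(pipelined_ingest(data, lambda x: x * 2))
```

With a single mapper thread consuming a FIFO queue, output order matches input order; a real scale-up framework would use multiple mapper threads and read from disk rather than an in-memory list.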
Keywords
applications, architectures, distributed systems, distributed applications, performance measurements, pipelines, instruction sets, parallel processing, merging, computational modeling, data handling