Optimizing RAM-latency dominated applications

APSys '13: Proceedings of the 4th Asia-Pacific Workshop on Systems (2013)

Abstract
Many apparently CPU-limited programs are actually bottlenecked by RAM fetch latency, often because they follow pointer chains through working sets much larger than the CPU's on-chip cache. For example, garbage collectors that identify live objects by tracing inter-object pointers can spend much of their time stalled on RAM fetches. We observe that for such workloads, programmers should view RAM much as they view disk. The two situations share not just high access latency but also a common set of approaches for coping with that latency: relatively general-purpose techniques such as batching, sorting, and "I/O" concurrency hide RAM latency much as they do disk latency. This paper studies several RAM-latency dominated programs and shows how we apply these general-purpose approaches to hide RAM latency. The evaluation shows that these optimizations improve performance by a factor of 1.3. Counter-intuitively, even though these programs are not limited by CPU cycles, we found that adding more cores can yield better performance.
Keywords
RAM latency, CPU-limited programs, garbage collectors, batching, sorting, I/O concurrency, general-purpose techniques, high access latency, CPU cycles