Distributed Memory Futures for Compile-Time, Deterministic-by-Default Concurrency in Distributed C++ Applications

2018 IEEE/ACM 4th International Workshop on Extreme Scale Programming Models and Middleware (ESPM2)

Abstract
Futures are a widely used abstraction for enabling deferred execution in imperative programs. Deferred execution enqueues tasks rather than explicitly blocking and waiting for them to execute. Many task-based programming models with some form of deferred execution rely on explicit parallelism that is the responsibility of the programmer. Deterministic-by-default (implicitly parallel) models instead use data effects to derive concurrency automatically, alleviating the burden of concurrency management. Both implicitly and explicitly parallel models are particularly challenging for imperative object-oriented programming: fine-grained parallelism across member functions or among data members may exist, but is often ignored. In this work, we define a general permissions model that leverages the C++ type system and move semantics to embed an asynchronous programming model directly in the language. Although a default distributed-memory semantic is provided, the concurrent semantics are entirely configurable through C++ constexpr integers. Correct use of the defined semantic is verified at compile time, allowing deterministic-by-default concurrency to be safely added to applications. Here we demonstrate the use of these "extended futures" for distributed-memory asynchronous communication and load balancing. An MPI particle-in-cell application is modified to use the wrapper class from this task model, with results presented for a Haswell system at up to 64 nodes.
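The following is a minimal sketch, not the paper's actual API, of how permissions on a future-like wrapper can be encoded with constexpr integers, move semantics, and compile-time checks as the abstract describes; the names AsyncRef, Permission, read, modify, and to_read_only are hypothetical placeholders.

```cpp
// Sketch: compile-time permission checking on a move-only "extended future"
// wrapper. Assumed names (AsyncRef, Permission, etc.) are illustrative only.
#include <utility>

// Permissions encoded as constexpr integers, so concurrent semantics can be
// configured and verified entirely at compile time.
enum Permission : int { None = 0, Read = 1, Modify = 2 };

template <typename T, int Perm>
class AsyncRef {
public:
  explicit AsyncRef(T value) : value_(std::move(value)) {}

  // Move-only: transferring the handle revokes the sender's access, which is
  // what allows concurrency to be derived deterministically from data effects.
  AsyncRef(AsyncRef&&) = default;
  AsyncRef(const AsyncRef&) = delete;

  // Reading requires at least Read permission; checked when instantiated.
  const T& read() const {
    static_assert(Perm >= Read, "AsyncRef: read access not granted");
    return value_;
  }

  // Mutation requires Modify permission.
  T& modify() {
    static_assert(Perm >= Modify, "AsyncRef: modify access not granted");
    return value_;
  }

  // Downgrading to a read-only handle consumes the original handle.
  AsyncRef<T, Read> to_read_only() && {
    return AsyncRef<T, Read>(std::move(value_));
  }

private:
  T value_;
};

int main() {
  AsyncRef<int, Modify> owned(42);
  owned.modify() = 43;                        // OK: Modify permission held
  auto shared = std::move(owned).to_read_only();
  int v = shared.read();                      // OK: Read permission held
  // shared.modify();                         // would fail to compile
  return v == 43 ? 0 : 1;
}
```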
Keywords
Task analysis, Semantics, Object oriented modeling, Concurrent computing, Load management, Load modeling, Parallel processing