Frustrated with MPI+Threads? Try MPIxThreads!
EuroMPI 2024
Abstract
MPI+Threads, embodied by the MPI/OpenMP hybrid programming model, is a
parallel programming paradigm where threads are used for on-node shared-memory
parallelization and MPI is used for multi-node distributed-memory
parallelization. OpenMP provides an incremental approach to parallelize code,
while MPI, with its isolated address space and explicit messaging API, affords
straightforward paths to obtain good parallel performance. However, MPI+Threads
is not an ideal solution. Because MPI is unaware of the thread context, it cannot
be used for inter-thread communication. This results in duplicated effort to
create separate, and sometimes nested, solutions for similar parallel tasks. In
addition, because the MPI library is required to obey message-ordering
semantics, mixing threads and MPI via MPI_THREAD_MULTIPLE can easily result in
miserable performance due to accidental serialization.
We propose a new MPI extension, MPIX Thread Communicator (threadcomm), that
allows threads to be assigned distinct MPI ranks within thread parallel
regions. The threadcomm extension combines both MPI processes and OpenMP
threads to form a unified parallel environment. We show that this MPIxThreads
(MPI Multiply Threads) paradigm allows OpenMP and MPI to work together in a
complementary way to achieve both cleaner code and better performance.
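To make the idea concrete, the following is a minimal sketch of how a threadcomm might be used. It assumes the experimental MPIX_Threadcomm API shipped in recent MPICH releases (MPIX_Threadcomm_init/start/finish/free); exact names and signatures may differ in other MPI implementations, and the extension is not part of the MPI standard.

```c
/* Sketch only: assumes MPICH's experimental MPIX_Threadcomm extension. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    int nthreads = 4;
    MPI_Comm threadcomm;
    /* Create a communicator that will span all processes x threads. */
    MPIX_Threadcomm_init(MPI_COMM_WORLD, nthreads, &threadcomm);

    #pragma omp parallel num_threads(nthreads)
    {
        /* Each thread activates the communicator and receives its own rank. */
        MPIX_Threadcomm_start(threadcomm);

        int rank, size;
        MPI_Comm_rank(threadcomm, &rank);
        MPI_Comm_size(threadcomm, &size); /* nprocs * nthreads */
        printf("thread-rank %d of %d\n", rank, size);

        /* Ordinary MPI calls now work between threads as well as
         * between processes, e.g. a send/recv between ranks 0 and 1. */
        if (rank == 0) {
            int msg = 42;
            MPI_Send(&msg, 1, MPI_INT, 1, 0, threadcomm);
        } else if (rank == 1) {
            int msg;
            MPI_Recv(&msg, 1, MPI_INT, 0, 0, threadcomm,
                     MPI_STATUS_IGNORE);
        }

        MPIX_Threadcomm_finish(threadcomm);
    }

    MPIX_Threadcomm_free(&threadcomm);
    MPI_Finalize();
    return 0;
}
```

The key point of the paradigm is visible here: the same MPI point-to-point API serves both inter-process and inter-thread communication, so there is no second, thread-specific messaging layer to maintain.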