GPU Implementation of Finite Difference Solvers

High Performance Computational Finance (2014)

Abstract
This paper discusses the implementation of one-factor and three-factor PDE models on GPUs. Both explicit and implicit time-marching methods are considered, with the latter requiring the solution of multiple tridiagonal systems of equations. Because of the small amount of data involved, one-factor models are primarily compute-limited, with a very good fraction of the peak compute capability being achieved. The key to the performance lies in the heavy use of registers and shuffle instructions for the explicit method, and a nonstandard hybrid Thomas/PCR algorithm for solving the tridiagonal systems for the implicit solver. The three-factor problems involve much more data, and hence their execution is more evenly balanced between computation and data communication to/from the main graphics memory. However, it is again possible to achieve a good fraction of the theoretical peak performance on both measures. The high performance requires particularly careful attention to coalescence in the data transfers, using local shared memory for small array transpositions, and padding to avoid shared memory bank conflicts. Computational results include comparisons to computations on Sandy Bridge and Haswell Intel Xeon processors, using both multithreading and AVX vectorisation.
Keywords
finite difference methods, finite difference solvers, graphics processing units, GPU implementation, GPU computing, mathematics computing, computational finance, partial differential equations, one-factor PDE model, three-factor PDE model, implicit time-marching methods, tridiagonal systems, tridiagonal equations, hybrid Thomas-PCR algorithm, registers, shuffle instructions, multithreading, AVX vectorisation, Sandy Bridge processor, Haswell Intel Xeon processor