Boosting the effective performance of massively parallel tensor network state algorithms on hybrid CPU-GPU based architectures via non-Abelian symmetries

arXiv (Cornell University), 2023

Abstract
We present novel algorithmic solutions, together with implementation details, that utilize non-Abelian symmetries to push the current limits of tensor network state algorithms on high-performance computing infrastructure. In our in-house developed hybrid CPU-multiGPU solution, scheduling is decentralized, threads are autonomous, and inter-thread communication is limited solely to interactions with globally visible lock-free constructs. Our custom-tailored virtual memory management ensures that data is produced with high spatial locality, which, together with the use of specific sequences of strided batched matrix operations, translates to significantly higher overall throughput. To lower I/O overhead, an adaptive buffering technique is used to dynamically match the level of data abstraction, at which cache repositories are built and reused, to system resources. The non-Abelian symmetry-related tensor algebra, based on the Wigner-Eckart theorem, is fully detached from the conventional tensor network layer, so massively parallel matrix and tensor operations can be performed without additional overhead. Altogether, we have achieved an order-of-magnitude increase in performance with respect to the results reported in arXiv:2305.05581 in terms of computational complexity, and at the same time a factor of three to six in the actual performance measured in TFLOPS. Benchmark results are presented on Hilbert space dimensions up to $2.88\times10^{36}$, obtained via large-scale SU(2) spin-adapted density matrix renormalization group simulations on selected strongly correlated molecular systems. These demonstrate the utilization of NVIDIA's highly specialized tensor cores, leading to performance around 110 TFLOPS on a single node equipped with eight NVIDIA A100 devices. In comparison to U(1) implementations of matching accuracy, our solution has an estimated effective performance of 250-500 TFLOPS.
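To illustrate the "strided batched matrix operations" primitive named in the abstract, the sketch below shows (in NumPy, not the authors' code) how a stack of independent symmetry-sector blocks can be multiplied in a single batched call. The block sizes and data here are purely hypothetical; on a GPU the same pattern maps onto a single `cublasGemmStridedBatched`-style invocation, which is what enables the high throughput described above.

```python
import numpy as np

# Hypothetical illustration: a strided batched matrix multiply over a
# stack of symmetry blocks. Each block b computes C[b] = A[b] @ B[b].
rng = np.random.default_rng(0)
batch, m, k, n = 4, 8, 6, 5          # illustrative sizes, not from the paper
A = rng.standard_normal((batch, m, k))
B = rng.standard_normal((batch, k, n))

# NumPy's matmul broadcasts over the leading (batch) axis, so one call
# performs all block multiplications, analogous to the GPU primitive.
C = A @ B

# Cross-check against an explicit per-block loop.
C_ref = np.stack([A[b] @ B[b] for b in range(batch)])
assert np.allclose(C, C_ref)
```

Because the blocks are stored contiguously with a fixed stride, a single kernel launch covers the whole batch, avoiding the per-block launch overhead that many small dense multiplications would otherwise incur.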
Keywords
parallel tensor network