Designing fast LTL model checking algorithms for many-core GPUs

Journal of Parallel and Distributed Computing (2012)

Cited by 58
Abstract
Recent technological developments have made various many-core hardware platforms widely accessible. These massively parallel architectures have been used to significantly accelerate many computationally demanding tasks. In this paper, we show how the algorithms for LTL model checking can be redesigned to accelerate LTL model checking on many-core GPU platforms. Our detailed experimental evaluation demonstrates that using NVIDIA CUDA technology yields a significant speedup of the verification process. Together with state space generation based on a shared hash table and DFS exploration, our CUDA-accelerated model checker is the fastest among state-of-the-art shared-memory model checking tools. The effective utilization of CUDA technology, however, is often reduced by the costly preparation of suitable data structures and is limited to small or medium-sized instances due to space limitations, which is also the case for our CUDA-aware LTL model checking solutions. Hence, we further suggest how to overcome these limitations by multi-core construction of the compact data structures and by employing multiple CUDA devices for acceleration of fine-grained, communication-intensive parallel algorithms for LTL model checking.
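The state space generation the abstract mentions can be illustrated with a minimal sketch: DFS exploration in which a hash table of visited states prevents re-expansion of already-known states. The toy transition system below (a pair of bit-flip transitions) is a hypothetical stand-in for illustration only; the paper's tool explores the states of a verified model and uses a concurrent shared hash table rather than this sequential one.

```python
# Sketch of hash-table-based state-space generation via DFS.
# The transition relation below is an invented toy example.

def successors(state):
    """Toy transition relation over 3-bit states."""
    a, b, c = state
    return [((a + 1) % 2, b, c), (a, (b + 1) % 2, (c + 1) % 2)]

def generate_state_space(initial):
    """DFS exploration; 'visited' plays the role of the shared hash table."""
    visited = {initial}            # hash table of known states
    stack = [initial]
    while stack:
        state = stack.pop()
        for succ in successors(state):
            if succ not in visited:  # hash lookup avoids duplicate expansion
                visited.add(succ)
                stack.append(succ)
    return visited

states = generate_state_space((0, 0, 0))
```

In the parallel setting described in the paper, many workers insert into one shared table concurrently, so the membership test and insertion must be performed atomically; the sequential set above elides that detail.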
Keywords
LTL model checking, linear temporal logic, many-core GPUs, NVIDIA CUDA, fine-grained communication-intensive parallel algorithms, compact data structures, shared-memory model checking tools