The Landscape and Challenges of HPC Research and LLMs
CoRR(2024)
Abstract
Recently, language models (LMs), especially large language models (LLMs),
have revolutionized the field of deep learning. Both encoder-decoder models and
prompt-based techniques have shown immense potential for natural language
processing and code-based tasks. Over the past several years, many research
labs and institutions have invested heavily in high-performance computing,
approaching or breaching exascale performance levels. In this paper, we posit
that adapting and utilizing such language model-based techniques for tasks in
high-performance computing (HPC) would be very beneficial. This study presents
our reasoning behind the aforementioned position and highlights how existing
ideas can be improved and adapted for HPC tasks.