Large Language Model Adaptation for Networking
CoRR (2024)
Abstract
Many networking tasks now employ deep learning (DL) to solve complex
prediction and system optimization problems. However, the current design
philosophy of DL-based algorithms entails intensive engineering overhead, owing
to the manual design of deep neural networks (DNNs) for different networking
tasks. Moreover, DNNs tend to generalize poorly to unseen data distributions
and environments.
Motivated by the recent success of large language models (LLMs), this work is
the first to study LLM adaptation for networking, exploring a more sustainable
design philosophy. With their massive pre-trained knowledge and powerful
inference abilities, LLMs can serve as foundation models and are expected to
achieve "one model for all" with better performance and stronger
generalization across diverse tasks. In this paper, we present NetLLM, the
first framework that efficiently adapts LLMs to solve networking problems.
NetLLM addresses several practical challenges in LLM adaptation, from how to
process task-specific information with LLMs, to how to improve the efficiency
of answer generation and acquire domain knowledge for networking.
Across three networking-related use cases - viewport prediction (VP), adaptive
bitrate streaming (ABR), and cluster job scheduling (CJS) - we showcase the
effectiveness of NetLLM in adapting LLMs for networking. Results show that the
adapted LLM surpasses state-of-the-art algorithms by 10.1-36.6% for VP and
14.5-36.6% for ABR, and also achieves stronger generalization performance.
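To make the adaptation idea concrete, the sketch below illustrates one common
pattern for efficiently adapting a large pre-trained model: keep the
pre-trained weight matrix frozen and train only a small low-rank update on
top of it. This is an illustrative example of parameter-efficient fine-tuning
in general, not NetLLM's actual architecture; all names and dimensions here
are assumptions for the sketch.

```python
# Hedged sketch of low-rank adaptation of a frozen linear layer:
# y = x @ (W + A @ B), where W (d x d) is the frozen pre-trained weight
# and only the small factors A (d x r) and B (r x d) are trained.
# Pure-Python matrices (lists of rows) keep the example self-contained.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def matadd(A, B):
    """Element-wise sum of two equal-shape matrices."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

class LowRankAdaptedLinear:
    """Frozen weight W plus a trainable rank-r update A @ B."""

    def __init__(self, W, r):
        d = len(W)
        self.W = W                                 # frozen pre-trained weight
        self.A = [[0.0] * r for _ in range(d)]     # trainable, zero-initialized
        self.B = [[0.0] * d for _ in range(r)]     # trainable
        # With A = 0, the layer initially reproduces the pre-trained behavior.

    def forward(self, x):
        delta = matmul(self.A, self.B)             # low-rank update A @ B
        return matmul(x, matadd(self.W, delta))    # x @ (W + A @ B)
```

Because the update has rank r << d, training touches only 2*d*r parameters
instead of d*d, which is the kind of efficiency that makes adapting a large
frozen model to many downstream tasks tractable.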