LLM as a System Service on Mobile Devices
arXiv (2024)
Abstract
As LLMs grow more powerful and more deeply involved in user-device
interactions, on-device execution becomes increasingly desirable for
preserving user privacy. In this work, we propose a new paradigm of mobile AI:
LLM as a system service on mobile devices (LLMaaS). Unlike traditional DNNs,
which execute in a stateless manner, such a system service is stateful: LLM
execution often needs to maintain persistent states (mainly the KV cache)
across multiple invocations. To minimize the LLM
context switching overhead under tight device memory budget, this work presents
LLMS, which decouples the memory management of app and LLM contexts with a key
idea of fine-grained, chunk-wise, globally-optimized KV cache compression and
swapping. By fully leveraging KV cache's unique characteristics, it proposes
three novel techniques: (1) Tolerance-Aware Compression: it compresses chunks
based on their measured accuracy tolerance to compression. (2) IO-Recompute
Pipelined Loading: it introduces recompute to swapping-in for acceleration. (3)
Chunk Lifecycle Management: it optimizes the memory activities of chunks with
an ahead-of-time swapping-out and an LCTRU (Least Compression-Tolerable and
Recently-Used) queue-based eviction. In evaluations conducted on
well-established traces and various edge devices, LLMS reduces context
switching latency by up to two orders of magnitude compared to competitive
baseline solutions.
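The LCTRU eviction policy above orders KV-cache chunks by how poorly they tolerate compression and how long they have been idle, so that chunks which cannot be cheaply compressed and are least recently used are swapped out first. A minimal sketch of such a queue is shown below; the class name, fields, and the exact priority formula (a lexicographic ordering on tolerance, then recency) are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass


@dataclass
class Chunk:
    """A KV-cache chunk tracked for eviction (illustrative)."""
    chunk_id: int
    tolerance: float  # measured accuracy tolerance to compression; higher = compresses well
    last_used: int    # logical timestamp of the most recent access


class LCTRUQueue:
    """Eviction queue ordered by (compression tolerance, recency).

    Chunks with the lowest tolerance to compression that were used least
    recently are evicted (swapped out) first; this priority formula is an
    assumption for illustration.
    """

    def __init__(self):
        self._chunks = {}
        self._clock = 0  # logical clock advanced on every access

    def touch(self, chunk_id: int, tolerance: float) -> None:
        """Record an access to a chunk, updating its recency."""
        self._clock += 1
        self._chunks[chunk_id] = Chunk(chunk_id, tolerance, self._clock)

    def evict(self) -> int:
        """Remove and return the chunk with the lowest (tolerance, last_used) pair."""
        victim = min(self._chunks.values(),
                     key=lambda c: (c.tolerance, c.last_used))
        del self._chunks[victim.chunk_id]
        return victim.chunk_id
```

For example, among two chunks with equally low tolerance, the one touched longer ago is evicted first, while a highly tolerant chunk is retained (it can be compressed in place instead of swapped out).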