zkLLM: Zero-Knowledge Proofs for Large Language Models
Annual ACM Conference on Computer and Communications Security (2024)
Abstract
The recent surge in artificial intelligence (AI), characterized by the prominence of large language models (LLMs), has ushered in fundamental transformations across the globe. However, alongside these advancements, concerns surrounding the legitimacy of LLMs have grown, posing legal challenges to their extensive applications. Compounding these concerns, the parameters of LLMs are often treated as intellectual property, restricting direct investigations.

In this study, we address a fundamental challenge within the realm of AI legislation: the need to establish the authenticity of outputs generated by LLMs. To tackle this issue, we present zkLLM, which stands, to the best of our knowledge, as the first specialized zero-knowledge proof tailored for LLMs. Addressing the persistent challenge of non-arithmetic operations in deep learning, we introduce tlookup, a parallelized lookup argument designed for non-arithmetic tensor operations, offering a solution with no asymptotic overhead. Furthermore, leveraging the foundation of tlookup, we introduce zkAttn, a specialized zero-knowledge proof crafted for the attention mechanism, carefully balancing considerations of running time, memory usage, and accuracy.

Empowered by our fully parallelized CUDA implementation, zkLLM emerges as a significant stride towards achieving efficient zero-knowledge verifiable computations over LLMs. Remarkably, for LLMs with 13 billion parameters, our approach enables the generation of a correctness proof for the entire inference process in under 15 minutes. The resulting proof, compactly sized at less than 200 kB, is designed to uphold the privacy of the model parameters, ensuring no inadvertent information leakage.
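The abstract does not spell out how tlookup or zkAttn work internally, but the general idea behind lookup-based handling of non-arithmetic operations can be sketched. Functions such as exp() inside softmax are not low-degree polynomials over a finite field, so lookup arguments replace them with a precomputed table of (input, output) pairs; the prover then shows that every evaluation it used appears in the committed table. The sketch below (all names and the fixed-point scale are our own illustrative assumptions, not the paper's construction) shows the computational side of this idea: a tabulated exp() plugged into one row of attention.

```python
import math

# Fixed-point scale for table indices (an illustrative assumption).
SCALE = 1 << 8


def build_exp_table(lo, hi):
    """Precompute exp() on a fixed-point grid over [lo, hi].
    In a lookup argument, the prover commits to such a table up front."""
    return {i: math.exp(i / SCALE) for i in range(lo * SCALE, hi * SCALE + 1)}


def table_exp(table, x):
    """Evaluate exp() via the table. In an actual proof system, what gets
    proven is that the (index, value) pair is a member of the committed
    table, rather than re-deriving exp() arithmetically."""
    return table[round(x * SCALE)]


def attention_row(q, keys, values, table):
    """One attention row: dot-product scores, a table-based softmax
    (the non-arithmetic step), then a weighted sum of the values."""
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
              for k in keys]
    m = max(scores)  # subtract the max so all table inputs are <= 0
    exps = [table_exp(table, s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    return [sum(w * v[j] for w, v in zip(weights, values))
            for j in range(len(values[0]))]
```

For example, with `table = build_exp_table(-8, 0)`, calling `attention_row([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]], table)` yields weights that sum to one and favor the first key, matching the exact softmax up to the table's quantization error. The actual zkAttn construction additionally balances table size against accuracy, which is the running-time/memory/accuracy trade-off the abstract refers to.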