Flash-LLM: Enabling Low-Cost and Highly-Efficient Large Generative Model Inference With Unstructured Sparsity.

Haojun Xia, Zhen Zheng, Yuchao Li, Donglin Zhuang, Zhongzhu Zhou, Xiafei Qiu, Yong Li, Wei Lin, Shuaiwen Leon Song

Proc. VLDB Endow. (2023)
