Cost-Performance Optimization for Processing Low-Resource Language Tasks Using Commercial LLMs
arXiv (2024)
Abstract
Large Language Models (LLMs) exhibit impressive zero/few-shot inference and
generation quality for high-resource languages (HRLs). A few of them have been
trained on low-resource languages (LRLs) and give decent performance. Owing to
the prohibitive costs of training LLMs, they are usually used as a network
service, with the client charged by the count of input and output tokens. The
number of tokens strongly depends on the script and language, as well as the
LLM's sub-word vocabulary. We show that LRLs are at a pricing disadvantage,
because the well-known LLMs produce more tokens for LRLs than HRLs. This is
because most currently popular LLMs are optimized for HRL vocabularies. Our
objective is to level the playing field: reduce the cost of processing LRLs in
contemporary LLMs while ensuring that predictive and generative qualities are
not compromised. As means to reduce the number of tokens processed by the LLM,
we consider code-mixing, translation, and transliteration of LRLs to HRLs. We
perform an extensive study using the IndicXTREME dataset, covering 15 Indian
languages, while using GPT-4 (one of the costliest LLM services released so
far) as a commercial LLM. We observe and analyze interesting patterns involving
token count, cost, and quality across a multitude of languages and tasks. We
show that choosing the best policy to interact with the LLM can reduce cost by
about 90% compared to communicating with the LLM in the original LRL.
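
To make the tokenization disparity concrete, here is a minimal sketch (not from the paper) that counts GPT-4 tokens for an English sentence and an illustrative Hindi translation using the tiktoken package. The sentences are assumptions chosen for illustration, not examples from the IndicXTREME dataset.

# Count GPT-4 tokens for the same content in an HRL (English) and an LRL
# (Hindi, Devanagari script). Requires: pip install tiktoken
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")  # cl100k_base vocabulary

english = "The weather is very pleasant today."
hindi = "आज मौसम बहुत सुहावना है।"  # illustrative Hindi translation

for label, text in [("English (HRL)", english), ("Hindi (LRL)", hindi)]:
    print(f"{label}: {len(enc.encode(text))} tokens")

# Devanagari text typically splits into far more sub-word tokens than the
# English equivalent, so the same content costs more under per-token pricing.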
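One of the cost-reduction policies the abstract names is transliteration of the LRL into a Latin-script form before querying the LLM. Below is a hedged sketch of that idea using the indic-transliteration package; the scheme choice (ITRANS) and the sentence are illustrative assumptions, and the paper's actual pipeline may differ.

# Romanize Devanagari input, then compare token counts before and after.
# Requires: pip install tiktoken indic-transliteration
import tiktoken
from indic_transliteration import sanscript
from indic_transliteration.sanscript import transliterate

enc = tiktoken.encoding_for_model("gpt-4")

hindi = "आज मौसम बहुत सुहावना है।"  # illustrative sentence
romanized = transliterate(hindi, sanscript.DEVANAGARI, sanscript.ITRANS)

print("original: ", len(enc.encode(hindi)), "tokens")
print("romanized:", len(enc.encode(romanized)), "tokens")

# Romanized text usually maps onto GPT-4's Latin-heavy vocabulary much more
# compactly, which is one way such a policy can cut per-token cost for LRLs.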