LAraBench: Benchmarking Arabic AI with Large Language Models
CoRR (2023)
Abstract
Recent advancements in Large Language Models (LLMs) have significantly
influenced the landscape of language and speech research. Despite this
progress, these models lack specific benchmarking against state-of-the-art
(SOTA) models tailored to particular languages and tasks. LAraBench addresses
this gap for Arabic Natural Language Processing (NLP) and Speech Processing
tasks, including sequence tagging and content classification across different
domains. We utilized models such as GPT-3.5-turbo, GPT-4, BLOOMZ,
Jais-13b-chat, Whisper, and USM, employing zero and few-shot learning
techniques to tackle 33 distinct tasks across 61 publicly available datasets.
This involved 98 experimental setups, encompassing 296K data points, 46 hours
of speech, and 30 sentences for Text-to-Speech (TTS). This effort resulted in
330+ sets of experiments. Our analysis focused on measuring the performance gap
between SOTA models and LLMs. The overarching trend observed was that SOTA
models generally outperformed LLMs in zero-shot learning, with a few
exceptions. Notably, larger computational models with few-shot learning
techniques managed to reduce these performance gaps. Our findings provide
valuable insights into the applicability of LLMs for Arabic NLP and speech
processing tasks.
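The abstract contrasts zero-shot and few-shot prompting of LLMs. As a minimal illustrative sketch (not taken from the paper; the task, labels, and Arabic examples are hypothetical), the difference comes down to whether labeled demonstrations are prepended to the test instance in the prompt:

```python
# Illustrative sketch: composing zero-shot vs. few-shot prompts of the
# kind an LAraBench-style evaluation might send to an LLM.
# The classification task, labels, and examples below are hypothetical.

def build_prompt(instruction, test_input, few_shot_examples=None):
    """With no examples the prompt is zero-shot; with k examples, k-shot."""
    parts = [instruction]
    for text, label in (few_shot_examples or []):
        parts.append(f"Text: {text}\nLabel: {label}")
    # The test instance comes last, with the label left for the model.
    parts.append(f"Text: {test_input}\nLabel:")
    return "\n\n".join(parts)

instruction = "Classify the sentiment of the Arabic text as positive or negative."

zero_shot = build_prompt(instruction, "خدمة ممتازة")
few_shot = build_prompt(
    instruction,
    "خدمة ممتازة",
    few_shot_examples=[("تجربة سيئة", "negative"), ("منتج رائع", "positive")],
)
```

In this framing, few-shot evaluation only changes the prompt, not the model, which is why the abstract can compare the same LLMs under both regimes.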
Keywords: arabic ai, language