MultiLegalPile: A 689GB Multilingual Legal Corpus

CoRR (2023)

Abstract
Large, high-quality datasets are crucial for training large language models (LLMs). However, few datasets are available for specialized critical domains such as law, and those that exist are often English-only. We curate and release MultiLegalPile, a 689GB corpus in 24 languages from 17 jurisdictions. The corpus includes diverse legal data sources with varying licenses, allowing pretraining of NLP models under fair use, with more permissive licenses for the Eurlex Resources and Legal mC4 subsets. We pretrain two RoBERTa models and one Longformer multilingually, as well as 24 monolingual models, one on each language-specific subset, and evaluate them on LEXTREME. Additionally, we evaluate the English and multilingual models on LexGLUE. Our multilingual models set a new state of the art (SotA) on LEXTREME, and our English models do so on LexGLUE. We release the dataset, the trained models, and all code under the most open licenses possible.
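
For readers who want to experiment with the corpus, the following minimal sketch shows how a language-specific subset might be loaded with the Hugging Face datasets library. The repository ID, configuration name, and field name below are assumptions for illustration only; the abstract does not give them, so consult the released dataset card for the actual identifiers.

from datasets import load_dataset

# Hypothetical Hub repository ID and "language_source" configuration name;
# the real identifiers are on the released dataset card.
corpus = load_dataset(
    "joelniklaus/Multi_Legal_Pile",  # assumed repository ID
    "en_legislation",                # assumed configuration name
    split="train",
    streaming=True,                  # stream rather than download: the full corpus is 689GB
)

# Peek at the first document (the "text" field name is also an assumption).
first = next(iter(corpus))
print(first["text"][:500])

Streaming is the natural mode here: it iterates over shards on the fly instead of materializing the full 689GB corpus on local disk.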
Keywords
MultiLegalPile