NextLevelBERT: Investigating Masked Language Modeling with Higher-Level Representations for Long Documents
CoRR (2024)
Abstract
While (large) language models have improved significantly in recent years, they still struggle to sensibly process long sequences, found, e.g., in books, due to the quadratic scaling of the underlying attention mechanism. To address this, we propose NextLevelBERT, a Masked Language Model operating not on tokens, but on higher-level semantic representations in the form of text embeddings. We pretrain NextLevelBERT to predict the vector representation of entire masked text chunks and evaluate the effectiveness of the resulting document vectors on three task types: 1) semantic textual similarity via zero-shot document embeddings, 2) long-document classification, and 3) multiple-choice question answering. We find that next-level Masked Language Modeling is an effective technique for tackling long-document use cases and can outperform much larger embedding models as long as the required level of detail is not too high. We make our model and code available.
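To make the core idea concrete, below is a minimal sketch of chunk-level masked language modeling as the abstract describes it: a frozen sentence encoder (e.g., a MiniLM-style model) embeds text chunks, some chunk embeddings are replaced by a learned mask vector, and a small transformer is trained to reconstruct the original embeddings at the masked positions. All module names, dimensions, and the cosine reconstruction loss are illustrative assumptions, not the paper's exact implementation.

    # Sketch of next-level masked language modeling over chunk embeddings.
    # Assumes chunk embeddings come from a frozen sentence encoder; the
    # architecture sizes and loss below are illustrative, not the paper's.
    import torch
    import torch.nn as nn

    class NextLevelMLM(nn.Module):
        def __init__(self, dim=384, depth=6, heads=8, max_chunks=512):
            super().__init__()
            self.mask_token = nn.Parameter(torch.zeros(dim))        # learned [MASK] vector
            self.pos = nn.Parameter(torch.zeros(max_chunks, dim))   # chunk-position embeddings
            layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
            self.head = nn.Linear(dim, dim)  # project back into chunk-embedding space

        def forward(self, chunk_embs, mask):
            # chunk_embs: (batch, n_chunks, dim) frozen chunk embeddings
            # mask:       (batch, n_chunks) bool, True where a chunk is masked out
            x = torch.where(mask.unsqueeze(-1), self.mask_token, chunk_embs)
            x = x + self.pos[: x.size(1)]
            return self.head(self.encoder(x))

    # One training step: regress predicted vectors at masked positions onto
    # the original chunk embeddings (cosine distance here as one plausible
    # regression objective).
    model = NextLevelMLM()
    chunk_embs = torch.randn(2, 16, 384)             # stand-in for encoder outputs
    mask = torch.zeros(2, 16, dtype=torch.bool)
    mask[:, ::6] = True                              # mask a few chunks per document
    pred = model(chunk_embs, mask)
    loss = (1 - nn.functional.cosine_similarity(pred[mask], chunk_embs[mask], dim=-1)).mean()
    loss.backward()

After pretraining, pooling the encoder outputs over a document's chunks would yield the kind of zero-shot document vector evaluated in the paper's downstream tasks.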