First Tragedy, then Parse: History Repeats Itself in the New Era of Large Language Models
arXiv (2023)
Abstract
Many NLP researchers are experiencing an existential crisis triggered by the
astonishing success of ChatGPT and other systems based on large language models
(LLMs). After such a disruptive change to our understanding of the field, what
is left to do? Taking a historical lens, we look for guidance from the first
era of LLMs, which began in 2005 with large n-gram models for machine
translation (MT). We identify durable lessons from the first era, and more
importantly, we identify evergreen problems where NLP researchers can continue
to make meaningful contributions in areas where LLMs are ascendant. We argue
that disparities in scale are transient and researchers can work to reduce
them; that data, rather than hardware, is still a bottleneck for many
applications; that meaningful realistic evaluation is still an open problem;
and that there is still room for speculative approaches.