
Recent Advances in Large Language Models: An Overview in 100 Papers

Author: AI Box

Date: 2023-05-18 12:23


Source: RUC AI Box

WeChat official account: RUC AI Box

Zhihu link: https://zhuanlan.zhihu.com/p/613923253

Introduction

At the end of last year, OpenAI released ChatGPT, which swept the globe within just a few months. Built on GPT-3.5, this large language model shows remarkable natural language generation and understanding abilities and can carry out tasks such as dialogue, translation, and summarization much as a human would. Owing to this strong performance, ChatGPT and the large language models behind it have quickly become a hot topic in artificial intelligence, drawing broad attention and participation from researchers and developers.

This article compiles 100 papers related to large language models published in 2022 at major top-tier venues (ACL, EMNLP, ICLR, ICML, NeurIPS, etc.).

I. Training

Pre-Training

1. UL2: Unifying Language Learning Paradigms

2. Learning to Grow Pretrained Models for Efficient Transformer Training

3. Efficient Large Scale Language Modeling with Mixtures of Experts

4. Knowledge-in-Context: Towards Knowledgeable Semi-Parametric Language Models

5. CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis

6. InCoder: A Generative Model for Code Infilling and Synthesis

7. CodeBPE: Investigating Subtokenization Options for Large Language Model Pretraining on Source Code

8. CodeRetriever: A Large Scale Contrastive Pre-Training Method for Code Search

9. UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining

10. GLM-130B: An Open Bilingual Pre-trained Model

11. When FLUE Meets FLANG: Benchmarks and Large Pretrained Language Model for Financial Domain

Instruction Tuning

1. What Makes Instruction Learning Hard? An Investigation and a New Challenge in a Synthetic Environment

2. InstructDial: Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning

3. Learning Instructions with Unlabeled Data for Zero-Shot Cross-Task Generalization

4. Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks

5. Boosting Natural Language Generation from Instructions with Meta-Learning

6. Help me write a Poem - Instruction Tuning as a Vehicle for Collaborative Poetry Writing

7. Multitask Instruction-based Prompting for Fallacy Recognition

8. Not All Tasks Are Born Equal: Understanding Zero-Shot Generalization

9. HypeR: Multitask Hyper-Prompted Training Enables Large-Scale Retrieval Generalization

II. Utilization

In-Context Learning

1. What learning algorithm is in-context learning? Investigations with linear models

2. Ask Me Anything: A simple strategy for prompting language models

3. Large Language Models are Human-Level Prompt Engineers

4. Using Both Demonstrations and Language Instructions to Efficiently Learn Robotic Tasks

5. kNN Prompting: Beyond-Context Learning with Calibration-Free Nearest Neighbor Inference

6. Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners

7. Selective Annotation Makes Language Models Better Few-Shot Learners

8. Active Example Selection for In-Context Learning

9. Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?

10. In-Context Learning for Few-Shot Dialogue State Tracking

11. Few-Shot Anaphora Resolution in Scientific Protocols via Mixtures of In-Context Experts

12. ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback

13. Controllable Dialogue Simulation with In-context Learning

14. Thinking about GPT-3 In-Context Learning for Biomedical IE? Think Again

15. XRICL: Cross-lingual Retrieval-Augmented In-Context Learning for Cross-lingual Text-to-SQL Semantic Parsing

16. On the Compositional Generalization Gap of In-Context Learning

17. Towards In-Context Non-Expert Evaluation of Reflection Generation for Counselling Conversations

18. Towards Few-Shot Identification of Morality Frames using In-Context Learning
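For readers new to the topic, the common thread in the papers above is that in-context learning adapts a frozen model purely through its prompt: a few labeled demonstrations are prepended to the test input and the model completes the pattern, with no gradient updates. The snippet below is a minimal illustrative sketch of that prompt construction only, not code from any of the papers listed; the `generate` callable is a hypothetical placeholder for whatever text-completion model is used.

```python
# Minimal sketch of few-shot in-context learning (illustrative only).
# `generate` is a hypothetical stand-in for any text-completion LLM call.
from typing import Callable, List, Tuple

def build_icl_prompt(demos: List[Tuple[str, str]], query: str) -> str:
    """Prepend labeled demonstrations to the test input; no weights are updated."""
    lines = []
    for text, label in demos:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)

def classify(generate: Callable[[str], str],
             demos: List[Tuple[str, str]], query: str) -> str:
    # The model's continuation of the prompt is taken as the prediction.
    prompt = build_icl_prompt(demos, query)
    return generate(prompt).strip()

# Example usage with dummy demonstrations (prints the constructed prompt):
demos = [("A wonderful, heartfelt film.", "positive"),
         ("Two hours I will never get back.", "negative")]
print(build_icl_prompt(demos, "The plot was thin but the acting saved it."))
```

Many of the papers in this subsection study exactly the choices this sketch leaves open, such as which demonstrations to select and how to order or calibrate them.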

Chain-of-Thought Prompting

1. ReAct: Synergizing Reasoning and Acting in Language Models

2. Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning

3. Neuro-Symbolic Procedural Planning with Commonsense Prompting

4. Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought

5. PINTO: Faithful Language Reasoning Using Prompt-Generated Rationales

6. Decomposed Prompting: A Modular Approach for Solving Complex Tasks

7. Complexity-Based Prompting for Multi-step Reasoning

8. Automatic Chain of Thought Prompting in Large Language Models

9. Compositional Semantic Parsing with Large Language Models

10. Self-Consistency Improves Chain of Thought Reasoning in Language Models

11. Least-to-Most Prompting Enables Complex Reasoning in Large Language Models

12. Entailer: Answering Questions with Faithful and Truthful Chains of Reasoning

13. Iteratively Prompt Pre-trained Language Models for Chain of Thought

14. ConvFinQA: Exploring the Chain of Numerical Reasoning in Conversational Finance Question Answering

15. Induced Natural Language Rationales and Interleaved Markup Tokens Enable Extrapolation in Large Language Models
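As background for the list above, chain-of-thought prompting asks the model to write out intermediate reasoning steps before its final answer, and self-consistency (item 10) samples several such reasoning paths and takes a majority vote over the extracted answers. The sketch below is a generic illustration of that idea and is not taken from any specific paper; `sample` is a hypothetical stand-in for a model call that returns one sampled completion, and the answer extractor is deliberately crude.

```python
# Illustrative sketch of chain-of-thought prompting with self-consistency
# (majority vote over sampled reasoning paths). `sample` is a hypothetical
# stand-in for a model call returning one completion per invocation.
import re
from collections import Counter
from typing import Callable

COT_DEMO = (
    "Q: A farmer has 3 pens with 4 sheep each. How many sheep in total?\n"
    "A: Each pen holds 4 sheep and there are 3 pens, so 3 * 4 = 12. The answer is 12.\n\n"
)

def cot_prompt(question: str) -> str:
    # One worked demonstration with explicit reasoning, then the new question.
    return COT_DEMO + f"Q: {question}\nA:"

def extract_answer(completion: str) -> str:
    # Take the last number mentioned, a common (if crude) answer extractor.
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
    return numbers[-1] if numbers else ""

def self_consistent_answer(sample: Callable[[str], str],
                           question: str, n_paths: int = 5) -> str:
    # Sample several reasoning paths and return the most frequent final answer.
    prompt = cot_prompt(question)
    answers = [extract_answer(sample(prompt)) for _ in range(n_paths)]
    answers = [a for a in answers if a]
    return Counter(answers).most_common(1)[0][0] if answers else ""
```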

Compression

1. Understanding and Improving Knowledge Distillation for Quantization Aware Training of Large Transformer Encoders

2. The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models

3. AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models

Others

1. BBTv2: Towards a Gradient-Free Future with Large Language Models

2. Compositional Task Representations for Large Language Models

3. Just Fine-tune Twice: Selective Differential Privacy for Large Language Models

III. Application

Multi-Modal

1. Classification via Description from Large Language Models

2. Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language

3. Plug-and-Play VQA: Zero-shot VQA by Conjoining Large Pretrained Models with Zero Training

Code

1. DocPrompting: Generating Code by Retrieving the Docs

2. Planning with Large Language Models for Code Generation

3. CodeT: Code Generation with Generated Tests

4. Language Models Can Teach Themselves to Program Better

Retrieval

1. Promptagator: Few-shot Dense Retrieval From 8 Examples

2. Recitation-Augmented Language Models

3. Generate rather than Retrieve: Large Language Models are Strong Context Generators

4. QUILL: Query Intent with Large Language Models using Retrieval Augmentation and Multi-stage Distillation

Text Generation

1. Generating Sequences by Learning to Self-Correct

2. RankGen: Improving Text Generation with Large Ranking Models

3. Eliciting Knowledge from Large Pre-Trained Models for Unsupervised Knowledge-Grounded Conversation

Others

1. Systematic Rectification of Language Models via Dead-end Analysis

2. Reward Design with Language Models

3. Bidirectional Language Models Are Also Few-shot Learners

4. Composing Ensembles of Pre-trained Models via Iterative Consensus

5. Binding Language Models in Symbolic Languages

6. Mind's Eye: Grounded Language Model Reasoning through Simulation

IV. Analysis & Evaluation

1. WikiWhy: Answering and Explaining Cause-and-Effect Questions

2. ROSCOE: A Suite of Metrics for Scoring Step-by-Step Reasoning

3. Quantifying Memorization Across Neural Language Models

4. Mass-Editing Memory in a Transformer

5. Multi-lingual Evaluation of Code Generation Models

6. STREET: A Multi-Task Structured Reasoning and Explanation Benchmark

7. Leveraging Large Language Models for Multiple Choice Question Answering

8. Broken Neural Scaling Laws

9. Language models are multilingual chain-of-thought reasoners

10. Language Models are Realistic Tabular Data Generators

11. Task Ambiguity in Humans and Language Models

12. Discovering Latent Knowledge in Language Models Without Supervision

13. Prompting GPT-3 To Be Reliable

14. Large language models are few-shot clinical information extractors

15. How Large Language Models are Transforming Machine-Paraphrase Plagiarism

16. Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs

17. SLING: Sino Linguistic Evaluation of Large Language Models

18. A Systematic Investigation of Commonsense Knowledge in Large Language Models

19. Lexical Generalization Improves with Larger Models and Longer Training

20. What do Large Language Models Learn beyond Language?

21. Probing for Understanding of English Verb Classes and Alternations in Large Pre-trained Language Models
