On Meta-Prompting
CoRR (2023)
Abstract
Certain statistical models are capable of interpreting input strings as
instructions, or prompts, and of carrying out tasks based on them. Many
approaches to prompting and to pre-training these models involve the automated
generation of such prompts. We call these approaches meta-prompting, or
prompting to obtain prompts. We propose a theoretical framework based on
category theory to generalize and describe them. This framework is flexible
enough to account for LLM stochasticity, and it allows us to obtain formal
results on the task-agnosticity and equivalence of various meta-prompting
approaches. We experiment with meta-prompting in two active areas of model
research: creativity and ideation. We find that users significantly prefer
(p < 0.01) the prompts generated under meta-prompting, as well as their
corresponding outputs, over a series of hardcoded baseline prompts that
include the original task prompt. Using our framework, we argue that
meta-prompting is more effective than basic prompting at generating desirable
outputs.
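The core idea of "prompting to obtain prompts" can be sketched in a few lines. The sketch below is illustrative only: `generate` is a hypothetical stand-in for any LLM completion call (it is not an API from the paper), and the hardcoded strings are placeholder behavior, not real model output.

```python
def generate(prompt: str) -> str:
    """Toy deterministic stand-in for an LLM completion call (hypothetical)."""
    if prompt.startswith("Write a prompt"):
        # Pretend the model authored a task prompt for us.
        return "Brainstorm ten unconventional uses for a paperclip."
    # Pretend the model answered the task prompt.
    return f"[model output for: {prompt}]"

def meta_prompt(task_description: str) -> str:
    """Meta-prompting: ask the model to write the prompt for a downstream task."""
    meta = f"Write a prompt that instructs a model to {task_description}."
    return generate(meta)

# First model call produces a prompt; second call consumes it.
task_prompt = meta_prompt("produce creative ideation output")
output = generate(task_prompt)
print(task_prompt)
print(output)
```

The two-stage call structure (model output of stage one becomes the input of stage two) is what the paper's categorical framework generalizes.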