Empirical Evaluation of ChatGPT on Requirements Information Retrieval Under Zero-Shot Setting

arXiv (Cornell University), 2023

Abstract
Recently, numerous illustrative examples have demonstrated the impressive ability of generative large language models (LLMs) to perform NLP-related tasks, and ChatGPT is undoubtedly the most representative model. We empirically evaluate ChatGPT’s performance on requirements information retrieval (IR) tasks to derive insights for designing and developing more effective requirements retrieval methods or tools based on generative LLMs. We design an evaluation framework covering four combinations of two popular IR tasks and two common artifact types. Under the zero-shot setting, the evaluation results reveal ChatGPT’s promising ability to retrieve requirements-relevant information (high recall) but limited ability to retrieve more specific requirements information (low precision). This evaluation of ChatGPT on requirements IR under the zero-shot setting provides preliminary evidence for designing and developing more effective requirements IR methods and tools based on generative LLMs.
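To make the zero-shot setup concrete, the following is a minimal sketch (not the paper's actual protocol) of how one might prompt ChatGPT to retrieve requirements-relevant sentences from an artifact and score the output with precision and recall against a gold annotation. The model name, prompt wording, and output parsing are illustrative assumptions; the OpenAI Python SDK (v1+) is used for the API call.

```python
# Illustrative sketch only: zero-shot requirements retrieval with ChatGPT,
# scored by precision/recall. Prompt wording, model choice, and parsing
# are assumptions, not the paper's exact evaluation protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def retrieve_requirements(artifact_sentences: list[str]) -> set[int]:
    """Ask ChatGPT (zero-shot, no in-context examples) which sentences are requirements-relevant."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(artifact_sentences))
    prompt = (
        "Below is a numbered list of sentences from a software artifact.\n"
        "Return the numbers of the sentences that contain requirements-relevant "
        "information, as a comma-separated list and nothing else.\n\n" + numbered
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    answer = resp.choices[0].message.content or ""
    # Parse the comma-separated indices returned by the model.
    return {int(tok) for tok in answer.replace(",", " ").split() if tok.isdigit()}


def precision_recall(predicted: set[int], gold: set[int]) -> tuple[float, float]:
    """Standard set-based precision and recall against the gold annotation."""
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall
```

In such a setup, a model that over-selects sentences would show exactly the pattern the abstract reports: high recall (most relevant sentences are captured) alongside low precision (many irrelevant sentences are returned as well).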
Keywords
ChatGPT, Requirements Information Retrieval, Empirical Evaluation, Natural Language Processing for Requirements Engineering