Artificial Intelligence versus Software Engineers: An Evidence-Based Assessment Focusing on Non-Functional Requirements

Research Square (2023)

Abstract
The automation of Software Engineering (SE) tasks using Artificial Intelligence (AI) is growing, with AI increasingly leveraged for project management, modeling, testing, and development. Notably, ChatGPT, an AI-powered chatbot, has been introduced as a versatile tool for code writing and test plan generation. Despite the excitement around AI's potential to raise productivity and even replace human roles in software development, solid empirical evidence remains scarce. Typically, a software engineer's solution is evaluated against a variety of non-functional requirements such as performance, efficiency, reusability, and usability. This study presents an empirical comparison of software engineers and AI on specific development tasks, using an array of quality parameters. Our aim is to enhance the interplay between humans and machines, increase the trustworthiness of AI methodologies, and identify the best performer for each task; in doing so, the study also contributes to refining cooperative, human-in-the-loop workflows in software engineering. The study investigates two distinct scenarios: the analysis of ChatGPT-produced code against developer-created code on Leetcode, and the comparison of automated machine learning (Auto-ML) and manual methods in the creation of a control structure for an Internet of Things (IoT) application. Our findings reveal that while software engineers excel in some scenarios, AI performs better in others. This empirical study helps forge a new pathway for collaborative human-machine intelligence in which AI's capabilities augment human skills in software engineering.
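To illustrate the kind of non-functional comparison the abstract describes, the sketch below is a minimal illustration that is not taken from the paper: the task, both solution functions, and the profiling harness are hypothetical. It profiles two candidate implementations of the same Leetcode-style problem for runtime and peak memory, two of the quality parameters mentioned above.

```python
import time
import tracemalloc

# Hypothetical candidate solutions to the same task ("two sum"):
# one stands in for a human-written submission, one for an AI-generated one.
def two_sum_human(nums, target):
    seen = {}
    for i, n in enumerate(nums):
        if target - n in seen:
            return [seen[target - n], i]
        seen[n] = i
    return []

def two_sum_ai(nums, target):
    # Deliberately brute-force, to give the profiler something to contrast.
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return [i, j]
    return []

def profile(solution, nums, target):
    """Return (runtime in seconds, peak memory in KiB) for one call."""
    tracemalloc.start()
    start = time.perf_counter()
    solution(nums, target)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed, peak / 1024

if __name__ == "__main__":
    data = list(range(20_000))
    for name, fn in [("human", two_sum_human), ("ai", two_sum_ai)]:
        t, mem = profile(fn, data, target=39_997)
        print(f"{name}: {t:.4f} s, peak {mem:.1f} KiB")
```

In a study like the one described, such measurements would be repeated across many tasks and submissions before any comparison is drawn.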
Keywords
software engineers, artificial intelligence, assessment, evidence-based, non-functional