Should Artificial Intelligence‐based language models be allowed in developing scientific manuscripts? A debate between ChatGPT and the editors of Liver International

Liver International (2023)

Abstract
The development of advanced language models, such as ChatGPT-3.5/4.0 by OpenAI, is likely to revolutionize many fields, including science. In theory, descriptive reviews, grant submissions, or revisions of submitted manuscripts could mostly be drafted by a conversational interface running these language models, such as ChatGPT, with little to no input or control by scientists and physicians. The benefits and weaknesses of such an approach are easy to spot: on the one hand, ChatGPT could allow faster dissemination of knowledge while also significantly reducing workload; on the other hand, one cannot help but wonder what added role any author would play besides fact-checking the automatically generated text. Discussion of this topic is lively, with 143 papers published on the issue in 2023. Our publisher, Wiley, states in its Ethical Guidelines section that "Artificial Intelligence Generated Content (AIGC) tools—such as ChatGPT and others based on large language models (LLMs)—cannot be considered capable of initiating an original piece of research without direction by human authors. They also cannot be accountable for a published work or for research design, which is a generally held requirement of authorship (as discussed in the previous section), nor do they have legal standing or the ability to hold or assign copyright. If an author has used this kind of tool to develop any portion of a manuscript, its use must be described, transparently and in detail, in the Methods or Acknowledgements section. The author is fully responsible for the accuracy of any information provided by the tool and for correctly referencing any supporting work on which that information depends". As Academic Editors we strongly support this statement, but we also wonder whether a stricter editorial policy should be applied to articles submitted to Liver International. We thought the best way to answer this question was to ask ChatGPT itself.
We first asked ChatGPT to draft an editorial on this topic and received this generated text: “As an advanced language model, ChatGPT is a highly sophisticated tool that can assist researchers in the field of liver diseases to develop their research papers with ease. However, the use of ChatGPT in the scientific community is still relatively new, and there are potential risks and benefits that need to be evaluated before researchers can start relying on it for their scientific writing. On the one hand, ChatGPT has the potential to significantly enhance the quality of scientific writing. With its ability to generate highly accurate and detailed responses to various prompts, it can assist researchers in drafting manuscripts, grant proposals, and other scientific documents with ease. ChatGPT's advanced natural language processing capabilities can provide insights into the latest research in the field of liver diseases, help researchers identify key knowledge gaps, and suggest ways to improve the quality of their writing. Another potential benefit of using ChatGPT for scientific writing is its ability to save time and improve productivity. Writing a research paper on liver diseases can be a complex and time-consuming process, requiring researchers to invest a considerable amount of time and effort in drafting, editing, and revising their work. ChatGPT can significantly reduce the time required to write a research paper by generating well-structured and detailed drafts that can serve as the foundation for further refinement and improvement. However, it is important to acknowledge that with any new technology, there are potential risks and drawbacks associated with its use, especially when it comes to scientific writing. While ChatGPT has the potential to improve the efficiency and accuracy of scientific writing, there are several risks that should be taken into consideration. One significant risk is the possibility of introducing bias into scientific research. 
As an AI-based system, ChatGPT is only as objective as the data it is trained on. If the training data contains biases or inaccuracies, ChatGPT may inadvertently perpetuate these biases in scientific writing, leading to incorrect or misleading conclusions. Moreover, the use of ChatGPT in scientific writing may lead to the commodification of research, as researchers and publishers may prioritize quantity and speed over quality, leading to the production of large volumes of low-quality research that could flood the scientific literature. Such an approach could also hinder the development of critical thinking and creativity in scientific research, as researchers may rely too heavily on AI-based systems to generate ideas and hypotheses. Another potential risk associated with the use of ChatGPT in scientific writing is the issue of transparency and reproducibility. ChatGPT operates as a ‘black box,’ making it difficult to understand how it generates its outputs. This makes it challenging to reproduce and validate scientific findings based on ChatGPT-generated texts. Furthermore, the lack of transparency in ChatGPT's algorithms and decision-making processes raises questions about the accountability of research and potential ethical issues that may arise from its use. Therefore, it is important for researchers and publishers to exercise caution and due diligence when using ChatGPT in scientific writing. Researchers should critically evaluate the quality and reliability of the data used to train ChatGPT, and carefully consider the implications of using AI-based systems in scientific research. 
Moreover, publishers should take steps to ensure that ChatGPT-generated texts are clearly labelled as such and subjected to rigorous peer review to ensure their accuracy and validity." Our opinion is that the AI-generated answer was precise in assessing the benefits and limits of its use in our field, and quite brilliantly highlighted the possible negatives of using ChatGPT-generated texts. Having strongly agreed with this incipit, we then directly asked ChatGPT whether it thought that authors should use its services to draft scientific articles. This is the AI-generated text we received in response to our question: "ChatGPT is a language model designed to generate natural language responses to input prompts, and as such, it can be a helpful tool for generating ideas or exploring different ways of expressing ideas in scientific writing. However, it is important to note that ChatGPT is not a substitute for the critical thinking and analysis that is essential for scientific writing. Authors should exercise caution when using ChatGPT for scientific writing and should not rely on it as the sole source of information or argumentation. While ChatGPT can generate coherent and grammatically correct responses, it may not always provide accurate or reliable information. Therefore, authors should carefully evaluate and fact-check the information generated by ChatGPT before incorporating it into their scientific writing. In summary, authors can use ChatGPT as a tool to generate ideas or explore different ways of expressing ideas but should not rely on it as a substitute for critical thinking, analysis, and fact-checking in scientific writing." Once again, we strongly support this conclusion and, quite frankly, could not have said it better ourselves.
However, it is important to note that the increasing use of AI language models, such as ChatGPT, in scientific research and publications is likely to have a significant impact on the way scientific journal editors evaluate and judge papers, particularly review articles, editorials, and letters. Instead of focusing solely on the overview of available findings, we will likely prioritize the novel and original viewpoints presented by the authors. This shift in focus can help to enrich scientific discourse by generating new perspectives. Moreover, a thoughtful use of AI language models may also help in identifying potential biases or gaps in the literature, thereby promoting innovation.

The authors would like to acknowledge ChatGPT (GPT-3.5, https://chat.openai.com/chat) for participating in the writing of this manuscript.

Conflicts of interest: AA reports grant and research support from AbbVie and Gilead; advisory board: AbbVie, Gilead, MSD, Mylan, Intercept, Sobi, Takeda. RDA: advisory board: AbbVie, Gilead, Takeda; speaking and teaching: AbbVie, Gilead; research support: AbbVie, Gilead. AF received lecture fees from Gilead, Boston Scientific, Roche and MSD, and consultancy fees from AstraZeneca, Roche, SIRTEX, AB Exact Science, Taiho and Boston Scientific. LV reports speaking fees from MSD, Gilead, AlfaSigma and AbbVie; consultancy for Gilead, Pfizer, AstraZeneca, Novo Nordisk, Intercept Pharmaceuticals, Diatech Pharmacogenetics, IONIS, Viatris, Boehringer Ingelheim and Resalis; research grants from Gilead; and patents with Takeda.
Keywords
scientific manuscripts, language models, artificial intelligence-based, editors