Mission Statement
Researchers in various disciplines were quick to experiment with generative AI systems when they came to prominence in late 2022 and early 2023. Chatbots were used as objects of research, as tools for conducting research, and even as authors of published research literature. The last of these three applications was immediately controversial, and a consensus rapidly grew that AI systems should not be credited as authors.
AI tools cannot be used in the work, nor can figures, images, or graphics be the products of such tools. And an AI program cannot be an author. A violation of these policies will constitute scientific misconduct no different from altered images or plagiarism of existing works.
—H. Holden Thorp, Editor-in-Chief, Science Journals1
The justification for this stance is almost invariably that AI systems cannot be held accountable for their outputs. However, authors who solicit those outputs and use them to contribute to research manuscripts can and should be held accountable for doing so.
We believe that authors are ultimately responsible for the text generated by NLP systems and must be held accountable for inaccuracies, fallacies, or any other problems in manuscripts. We take this position because 1) NLP systems respond to prompts provided by researchers and do not proactively generate text; 2) authors can juxtapose text generated by an NLP system with other text (e.g., their own writing) or simply revise or paraphrase the generated text; and 3) authors will take credit for the text in any case.
—Hosseini, Rasmussen, and Resnik; Accountability in Research2
This is the general view of numerous academic publishing organizations, including but not limited to the Committee on Publication Ethics (COPE)3, the Council of Science Editors4, the International Committee of Medical Journal Editors (ICMJE)5, and the World Association of Medical Editors (WAME)6. The consensus extends to scientific associations, such as the Institute of Electrical and Electronics Engineers (IEEE)7, the Institute of Physics (IOP)8, and the Society of Photo-Optical Instrumentation Engineers (SPIE)9, and to publishers themselves, including Elsevier10, Frontiers11, the Multidisciplinary Digital Publishing Institute (MDPI)12, the Public Library of Science (PLoS)13, Sage14, Science15, Springer16, Taylor and Francis17, and Wiley18. These organizations also agree that if authors use AI systems to help write their manuscripts, that usage of AI must be declared in the manuscript itself.
Because NLP systems may be used in ways that may not be obvious to the reader, researchers should disclose their use of such systems and indicate which parts of the text were written or co-written by an NLP system.
—Hosseini, Rasmussen, and Resnik; Accountability in Research2
Just as plagiarism can involve the misappropriation or theft of words or ideas, NLP-generated ideas may also affect the integrity of publications. When NLP assistance has impacted the content of a publication (even in the absence of direct use of NLP-generated text), this should be disclosed.
—Hosseini, Rasmussen, and Resnik; Accountability in Research2
Generative AI has manifold implications for research, for better and for worse; the ethical and legal ramifications are too numerous and complex to list here. That researchers should at the very least declare their usage of AI, however, is a simple imperative that imposes no undue strain on the research community. Responsible researchers already declare their usage of various tools, such as scientific instruments, data capture and management software, and programming languages. A declaration of the use of AI, such as that recommended by the cited organizations, is no more taxing.
Yet, as widely adopted as policies of transparency are, they are not enforced uniformly. It is my contention that authors, reviewers, and editors alike should be as scrupulous in ensuring the transparent declaration of AI usage as they are in ensuring the declaration of funding sources, conflicts of interest, and data availability.
On Academ-AI, I am documenting the examples I can find that leave me with little doubt as to their AI origin, but it is impossible to say just how extensive the failure to report AI usage is, considering that much AI-generated content may be undetectable, particularly once edited by human authors.
It is possible that habitually holding authors accountable in cases of obvious AI usage will encourage them to declare AI use even when they could get away with not doing so. Just as conflicts of interest do not necessarily invalidate a researcher’s findings, but failure to declare them should raise suspicion, so the use of generative AI should not be disqualifying in and of itself, but failure to declare it should be considered a serious problem. I have further argued that requiring the declaration of AI usage or lack thereof in all cases would incentivize transparency on the part of authors.19