Ian Sample, Science editor

Science journals ban listing of ChatGPT as co-author on papers

ChatGPT can write poems, short stories, essays and even personal advice. Photograph: Hollandse Hoogte/Rex/Shutterstock

The publishers of thousands of scientific journals have banned or restricted contributors’ use of an advanced AI-driven chatbot amid concerns that it could pepper academic literature with flawed and even fabricated research.

ChatGPT, a fluent but flaky chatbot developed by OpenAI in California, has impressed or distressed more than a million human users by rattling out poems, short stories, essays and even personal advice since its launch in November.

But while the chatbot has proved a huge source of fun – its take on how to free a peanut butter sandwich from a VCR, in the style of the King James Bible, is one notable hit – the program can also produce fake scientific abstracts that are convincing enough to fool human reviewers.

ChatGPT’s more legitimate uses in article preparation have already led to it being credited as a co-author on a handful of papers.

The sudden arrival of ChatGPT has prompted a scramble among publishers to respond. On Thursday, Holden Thorp, the editor-in-chief of the leading US journal Science, announced an updated editorial policy, banning the use of text from ChatGPT and clarifying that the program could not be listed as an author.

“Given the frenzy that has built up around this, it’s a good idea to make it absolutely explicit that we will not permit ChatGPT to be an author or to have its text used in papers,” Thorp said.

Leading scientific journals require authors to sign a form declaring that they are accountable for their contribution to the work. Since ChatGPT cannot do this, it cannot be an author, Thorp said.

But even using ChatGPT in the preparation of a paper is problematic, he believes. ChatGPT makes plenty of errors, which could find their way into the literature, he said, and if scientists come to rely on AI programs to prepare literature reviews or summarise their findings, the proper context of the work and the deep scrutiny that results deserve could be lost. “That is the opposite direction of where we need to go,” he said.

Other publishers have made similar changes. On Tuesday, Springer Nature, which publishes nearly 3,000 journals, updated its guidelines to state that ChatGPT cannot be listed as an author. But the publisher has not banned ChatGPT outright. The tool, and others like it, can still be used in the preparation of papers, provided full details are disclosed in the manuscript.

“The specific development that we felt very strongly that we needed to respond to was the fact that pretty much all of a sudden the tool was appearing as a co-author,” said Magdalena Skipper, the editor-in-chief of Nature.

With the right guardrails in place, Skipper believes, ChatGPT and similar AI tools could be beneficial for science, not least in levelling the playing field for non-native English speakers who could use AI programs to make the language in their papers more fluent.

Elsevier, which publishes about 2,800 journals, including Cell and the Lancet, has taken a similar stance to Springer Nature. Its guidelines allow the use of AI tools “to improve the readability and language of the research article, but not to replace key tasks that should be done by the authors, such as interpreting data or drawing scientific conclusions,” said Elsevier’s Andrew Davis, adding that authors must declare if and how they have used AI tools.

Michael Eisen, the editor-in-chief of eLife, said ChatGPT could not be an author, but he saw its adoption as inevitable. “I think the better question isn’t whether to allow it but how to manage the fact that it is being used,” he said. “The most important thing, for now, at least, is for authors to be very upfront about its use and to describe how it was used, and for us to be clear that in using the tool they are taking responsibility for its output.”

Sandra Wachter, a professor of technology and regulation at the University of Oxford, said: “It’s great to see publishers taking action. ChatGPT allows for corners to be cut and this is especially troubling if the suggested content is not rigorously double-checked but just assumed to be correct. This can lead to misinformation and junk science. I think many other sectors such as education, art and journalism will have to think about similar steps, because they are facing similar challenges.”
