Many people are busy experimenting with chatbots in the hope that generative artificial intelligence (AI) can improve their daily lives. Scientists, smart as they are, are already a few steps ahead. As we report, 10% or more of abstracts of articles in scientific journals now appear to be written at least in part by large language models (LLMs). In fields such as computer science, that figure rises to 20%. Among Chinese computer scientists, it is a third.
Some see this enthusiastic adoption as a mistake. They fear that large numbers of poor-quality papers will introduce bias, encourage plagiarism, and clog the machinery of scientific publishing. Some journals, including the Science family, impose onerous disclosure requirements on the use of LLMs. Such efforts are futile and misguided. LLMs cannot easily be policed; even if they could be, many scientists believe their use yields real benefits.
Researchers do not just run experiments and dream up big ideas. They face heavy demands on their time, from writing papers and teaching classes to filling out endless grant applications. LLMs help by speeding up the writing of papers, freeing scientists to develop new ideas, collaborate, or check their work for errors.
The technology could also help level a playing field that currently favors native English speakers, since most prestigious journals are published in English. LLMs can help those who do not speak the language well to translate and polish their text. Thanks to LLMs, scientists around the world should be able to disseminate their findings more easily and be judged on the brilliance of their ideas and the ingenuity of their research, rather than on their skill at avoiding dangling participles.
As with any technology, there are concerns. Because LLMs make it easier to produce professional-sounding text, they also make it easier to generate fake scientific papers. Science received 10,444 submissions last year, 83% of which were rejected before peer review. Some of these were undoubtedly AI-generated fantasies.
LLMs could also, through the text they produce, export the culture in which they were trained. Their lack of originality could lead to inadvertent plagiarism, in which they copy others' work directly. "Hallucinations" that are plainly false to experts but perfectly plausible to everyone else could also find their way into papers. Most worrying of all, writing is often an integral part of the research process, helping researchers clarify and formulate their own ideas. Excessive reliance on LLMs could therefore impoverish science.
Restricting the use of LLMs is not the way to deal with these problems. They will only become more ubiquitous and more powerful. They are already embedded in word processors and other software, and will soon be as common as spell checkers. Researchers tell us in surveys that they see the benefits of generative AI not just for writing papers, but also for coding and performing administrative tasks. And crucially, their use is not easily detectable. Journals could impose all the onerous disclosure requirements they liked; it would not help, because they would have no way of knowing when their rules had been broken. Journals such as Science should drop detailed disclosure requirements for the use of LLMs as writing tools, beyond a simple acknowledgement.
Science already has many defenses against fabrication and plagiarism. In a world where the cost of producing words plummets to zero, these will need further strengthening. Peer review, for example, will matter even more in a generative-AI world. It should be bolstered accordingly, perhaps by paying reviewers for the time they sacrifice to scrutinize papers. Researchers should also have stronger incentives to replicate experiments. University hiring and promotion committees should ensure that scientists are rewarded for the quality of their work and the new insights they generate. Limit the potential for abuse, and scientists have much to gain from their LLM scribes.
© 2024, The Economist Newspaper Limited. All rights reserved. From The Economist, published under license. Original content can be found at www.economist.com