Artificial intelligence: why its indiscriminate use is a danger to the production of knowledge

In an article published in Nature Astronomy, SISSA scientist Roberto Trotta warns against “AI scientific junk”

“If the difficult and demanding process of learning to think like a scientist is replaced by a simple command to ChatGPT, critical thinking, depth of analysis, and even the very skills that allow scientists to supervise the work of AI will be greatly reduced. Young researchers risk becoming little more than prompt engineers: people who know only how to ask AI questions in the most effective way possible. And this, for the research world, is a problem.”

This is the view expressed by Roberto Trotta, Professor of Theoretical Physics at the International School for Advanced Studies (SISSA) in Trieste, in a commentary recently published in Nature Astronomy, in which he examines the growing use of artificial intelligence in the scientific process: a trend that, according to Trotta, raises serious concerns for academia.

An ever-expanding use of AI in research

In the research world, artificial intelligence tools are increasingly being used for a variety of tasks that until recently were considered essentially human: from formulating research questions to writing grant proposals, from data interpretation to drafting scientific papers. Indeed, as Trotta notes in his Nature Astronomy article, a virtual research assistant has already demonstrated the ability to generate hundreds of fabricated yet plausible scientific papers in a single afternoon. One of these was accepted at Agents4Science 2025, the first conference in which AI is listed as the author of all contributions.

Between lack of originality and hallucinations

The issues at stake, Trotta explains, are highly delicate and concern not only the abilities of today’s researchers — and especially those of tomorrow — but also scientific production itself. One example is AI’s limited ability to generate genuinely original content rather than merely derivative material. Added to this are so-called “hallucinations,” that is, invented content presented as fact — a problem that continues to emerge even in the most recent AI models. Errors of this kind, Trotta warns, will become increasingly difficult to detect. There is also the lack of transparency in the logical processes by which AI arrives at its results, with the risk of producing evidence that is ever more opaque because it is hard to reproduce. The result is an “AI junk science” that threatens to flood academia.

AI that produces, AI that checks

And the problems do not end there. Thanks to the power of AI tools, the number of scientific articles produced each year is set to grow even further, increasing the burden on a publishing system already struggling to cope with the enormous volume of submissions. The risk, as Nobel Prize–winning chemist Venki Ramakrishnan has observed, is that “in the end, these articles will all be written by one AI agent, and then another AI agent will read them, analyze them, and produce a summary for human beings.”

The necessary human touch in science

To prevent students, scientists and institutions from becoming trapped in “this AI arms race,” whose accelerating pace seems to allow no pause, Trotta calls in his commentary for an urgent, broad debate involving academia across its many disciplines. The human touch in science, he emphasizes, is indispensable: without it, Trotta argues, the advancement of knowledge would be compromised in its very essence.

The article in Nature Astronomy

 
