
Scientifically Speaking | Is it time to worry about fake science?

By Anirban Mahapatra
Jun 07, 2023 09:47 PM IST

Generative AI can be used to write scientific papers, with references to non-existent articles. Journals and academia need to develop safeguards

There’s a phrase in science academia that is repeated often – “publish or perish.” Science advances through the publication of research findings in scientific journals. Scientists (especially at universities and academic institutions) vie with each other to publish. Postdoctoral scholars and PhD students advance their careers by publishing. Promotions, funding, and awards go to those who publish more and in reputable journals.

Generative AI tools are incredibly useful in summarising existing research findings. But they must be used with caution. (iStock)

Generative Artificial Intelligence (AI) tools are incredibly useful in summarising existing research findings. But they must be used with caution since they can also generate dubious results. For example, if asked to help write a scientific paper, ChatGPT, a generative AI chatbot, can easily churn out text with results it has made up (complete with references to non-existent articles written by fictitious authors). AI researchers refer to this as “hallucination”, and it is a consequence of how these systems are designed: they generate content based on probability, not accuracy.

Although ChatGPT was only released last November, we are already seeing signs of its misuse in professions that rely on specialised knowledge. This week, attorneys in the United States faced possible sanctions for using ChatGPT to prepare a legal brief filed against an airline company. The brief cited court decisions that neither the other attorneys nor the judge could find, because the chatbot had made them up. One of the attorneys on the case later regretted using “a source that has revealed itself to be unreliable.”

High-profile cases of fraud and misconduct in science have, no doubt, made headlines in the past. In recent years, paper mills (large-scale business operations that create fraudulent or low-quality research papers for those who need publications to advance their careers) have churned out spurious articles that passed checks by journal editors and reviewers and were published in scientific journals. Now, generative AI tools that create text and images pose a fresh threat to the accuracy of scientific research: they make fake scientific papers easier to create and harder for existing plagiarism software to detect.

It is only a matter of time before the first AI-generated fictitious article is published in a scientific journal. Research integrity experts quoted in a news story in Nature last week think the lag is simply due to the time peer-review takes before a scientific article is published. One of the experts quoted in the story, Jennifer Byrne at New South Wales Health Pathology and the University of Sydney in Australia, noted that “the capacity of paper mills to generate increasingly plausible raw data is just going to skyrocket with AI.”

Generative AI can be used in three different ways to write scientific papers.

One, it may be used as a tool to help craft a scientific paper with existing legitimate data and conclusions. Publishers of scientific journals have started to formulate guidelines that require disclosure of AI help in writing papers. Many journals do not allow tools like ChatGPT to be listed as authors, which makes sense: an AI cannot be held accountable for the contents of a paper, and owning up to what is written is typically a requirement of authorship.

Two, it may be used to introduce fictitious material into research articles even when the author has no intent to deceive. As I mentioned earlier, generative AI tools can generate fictitious text, images, and references. The attorneys in the New York lawsuit claimed to be innocent of the chatbot's deception and may, in fact, have gotten more than they bargained for when they consulted it.

The third and most alarming use of generative AI may be the wholesale creation of fake text and images with the intent to mislead. In science, the reputational risk of being caught committing fraud is severe. Still, some scientists risk their careers because the incentives are strong for those who are not caught. It is not hard to imagine commercial paper mills weaponising generative AI to produce papers built on fake science.

If you’ve ever followed a recipe in a cookbook, you might have wondered whether anyone else had tried to replicate the results using the listed ingredients and instructions. One aspect of scientific peer-review that isn’t widely known outside of science is that reviewers do not repeat the experiments described in submitted papers in their own labs. The entire peer-review process relies on the good-faith assumption that authors have actually done what they say they have done.

That doesn't mean that peer-review is completely broken. There are many safeguards that check for the veracity of the content. Scientific journals also typically employ more than one reviewer to write detailed comments shared with editors. Editors can accept, reject, or ask for clarification from authors at their discretion.

Of course, the process also relies on scientists who are chosen to review papers based on their expertise to put in the effort and not phone it in. An AI system could conceivably be used to write expert reviews and make decisions on papers, taking away much of the job of the reviewer. But should it?

An analogous question is being asked with the use of AI in medicine (and in other professions in which humans are often required to make subjective decisions). Right now, most would agree that though AI systems can help physicians in clinical practice, they cannot replace physicians. I tend to think that AI systems can’t replace reviewers for similar reasons. In roles that require judgment, we still need humans with expertise.

I don’t want to paint a bleak picture of generative AI systems. These are tools that most of us will use to reduce drudgery and improve productivity. But as with any transformative technology, we must be aware of the pitfalls that come with widespread adoption.

In science, peer-review remains a cornerstone of the scientific process. Surveys have shown that scientists value peer-review. And the self-correcting nature of science is what led to the detection of fraud and the retraction of spurious papers I mentioned earlier. With new and improved detection tools, AI may well be used in detecting scientific fraud too.

---

Anirban Mahapatra is a scientist by training and the author of a book on COVID-19. The views expressed are personal.
