Medical research article gets published in an Elsevier journal with irrelevant ChatGPT feedback text inside it


by Dr. Piyush Mathur


Careless usage of Generative Artificial Intelligence (GAI) models appears to be on the rise. A particularly high-profile instance of GAI-related (human) negligence was widely reported toward the end of last year; it involved Donald Trump’s former lawyer, Michael D. Cohen. As The New York Times (December 29, 2023) put it, Cohen had admitted to having mistakenly given ‘his lawyer bogus legal citations concocted by the artificial intelligence program Google Bard’, citations that had then been used ‘in a motion submitted to a federal judge’.

Such instances, though, have deeper implications if and when they are detected in academic publications, because they raise questions about the integrity of the research itself, quite apart from the attention, focus, and honesty of the publication outlet’s review process and editorial management. Sure enough, the spectre of plagiarism-related suspicion will always haunt a publication that has happened to retain some irrelevant GAI verbiage, even though it is all but certain that no researcher who has heard of GAI models would rule out using them for research and writing.

Well, the case in point is an article published in Radiology Case Reports 19 (2024) 2106-2111. Titled ‘Successful management of an Iatrogenic portal vein and hepatic artery injury in a 4-month-old female patient: A case report and literature review’, this article was flagged earlier today (March 17) on LinkedIn by Dr. Simon Chesterman, who holds several important positions including that of Principal Researcher, Office of the UN Secretary-General's Envoy on Technology.

In his LinkedIn post, Dr. Chesterman highlighted a chunk of text that occurs right before the article’s subsection titled ‘Conclusion’; the chunk reads exactly as follows (typo included):

In summary, the management of bilateral iatrogenic I’m very sorry, but I don’t have access to real-time information or patient-specific data, as I am an AI language model. I can provide general information about managing hepatic artery, portal vein, and bile duct injuries, but for specific cases, it is essential to consult with a medical professional who has access to the patient’s medical records and can provide personalized advice. It is recommended to discuss the case with a hepatobiliary surgeon or a multidisciplinary team experienced in managing complex liver injuries.

The foregoing is a partial screenshot of the LinkedIn post made by Dr. Simon Chesterman on March 17, 2024; the post highlights a chunk of irrelevant ChatGPT feedback that got retained in the article published in Elsevier’s Radiology Case Reports. (Screenshot credit: Dr. Simon Chesterman)

As one can see, most of the foregoing is a ChatGPT-generated response to the authors’ request for some data. Not only did these researchers (eight in all), who are, variously, from US-based Harvard Medical School and Israel-based Hadassah Medical Center as well as Hebrew University, fail to remove that leftover chunk of text produced by ChatGPT, but the journal’s reviewers and proofreaders also failed to notice it (somehow!). This leaves the reader wondering just how attentive the reviewers would have been to the rest of the submission.

Dr. Chesterman himself made light of it in his LinkedIn post, quipping (ellipsis included), ‘I’m starting to think that this is ChatGPT’s way of getting back at researchers for not listing it as a coauthor…’, before pointing out that the article remains online without any post-publication editorial correction yet! (Thoughtfox has linked to that version earlier in this report.)


Interestingly, a mere four days earlier, Dr. Chesterman had highlighted another such situation, this one involving an article published in another Elsevier journal, Surfaces and Interfaces (Volume 46, March 2024, 104081). That article’s ‘Introduction’ retains, at the very start of its opening sentence, verbiage suggesting that the whole section may have been written by ChatGPT! See below a partial screenshot of that section, which Dr. Chesterman had posted on his LinkedIn profile page:

A partial screenshot of a journal article’s ‘Introduction’ that retains ChatGPT’s irrelevant verbiage suggesting that the section has been written by ChatGPT. (Screenshot credit: Dr. Simon Chesterman)

That was all of four days ago, but Surfaces and Interfaces has yet to edit that irrelevant ChatGPT crumb out of the aforementioned publication!

These GAI goof-ups, however, are only one more reflection of the poor quality of Elsevier publications. There is quite a history of allegations of predatory practices and shoddy quality control against Elsevier; a relatively recent controversy (February 2023) involved its International Journal of Hydrogen Energy, which had rejected a submission citing, among other things, the fact that the manuscript had referred to only four previous articles from that journal.

In 2021, Elsevier had to retract a whole book, The Periodic Table: Nature’s Building Blocks: An Introduction to the Naturally Occurring Elements, Their Origins and Their Uses, which had plagiarised heavily from Wikipedia.


Dr. Piyush Mathur is a Research Scholar at Ronin Institute, and the author of Technological forms and ecological communication: A theoretical heuristic (Lexington Books, 2017). You may post your comments on this article using the form at the bottom; if you wish to send your inputs to Thoughtfox, click here.
