A surge of AI-driven publications: the impact on health professionals and potential mitigating solutions
Arzilli, Guglielmo; De Angelis, Luigi; Baglivo, Francesco; Privitera, Gaetano Pierpaolo; Rizzo, Caterina
2025-01-01
Abstract
The rapid development of generative AI is reshaping scientific communication, particularly in medicine and public health. Since the release of ChatGPT in 2022, Large Language Models have become widely accessible, supporting manuscript editing, statistical analysis, and rapid evidence synthesis. However, this surge in AI-generated content raises concerns about quality, reliability, and the ethical implications for scientific publishing. Increased reliance on AI-driven authoring tools could fuel an “infodemic”—an overwhelming flood of potentially unreliable or misleading information. This risk is exacerbated by the prevailing “publish or perish” culture, which prioritizes publication volume over meaningful contributions. In addition, the proliferation of academic journals, especially those that charge high publication fees, deepens inequalities in global health research and limits access for low-income countries. Documented cases of fabricated articles and false authorship in predatory journals highlight how AI can be misused, threatening evidence-based medicine and influencing healthcare decisions. To address these challenges, regulatory frameworks, ethical guidelines, and widespread digital literacy training for researchers and health professionals are critical. A balanced approach—harnessing the efficiency of AI while safeguarding scientific integrity—is needed to prevent an AI-driven infodemic and ensure the equitable, high-quality dissemination of medical knowledge.


