AI Tools in Academic Publishing: Opportunities and Ethical Boundaries
AI is entering every stage of the academic publishing workflow. Here is an honest assessment of where it genuinely helps, where it creates risks, and what shared norms the community needs to establish.
In the past two years, artificial intelligence tools have entered academic publishing in ways that were barely anticipated, from manuscript screening and reviewer matching to large language models used by authors to draft, edit, and translate their work. The pace of change is outrunning the norms that govern it. This post aims to provide a clear-eyed assessment of the current landscape.
Where AI Genuinely Adds Value
Several applications of AI in publishing have clear, defensible benefits. Plagiarism and similarity detection has been AI-driven for years and is widely accepted. Automated screening for basic formatting and reporting standards reduces the burden on editors without affecting editorial judgment. Reviewer-matching tools that analyse research profiles can shorten the search for qualified reviewers, a process that currently accounts for a significant share of editorial delay; a simplified sketch of the underlying idea appears below. Language editing tools that help non-native English speakers express their research clearly serve a genuine equity function.
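To make the reviewer-matching idea concrete, here is a deliberately minimal sketch: represent each reviewer by text drawn from their past titles and abstracts, vectorise everything with TF-IDF, and rank reviewers by cosine similarity to the submitted abstract. The reviewer names, profile strings, and the rank_reviewers helper are invented for illustration; real systems draw on much richer signals, such as citation graphs, learned embeddings, and conflict-of-interest checks.

```python
# Minimal illustration of similarity-based reviewer matching.
# All names and profiles below are hypothetical; production tools use
# richer profiles and learned representations, not bare TF-IDF.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical reviewer profiles built from past titles and abstracts.
reviewers = {
    "Reviewer A": "graph neural networks for molecular property prediction",
    "Reviewer B": "randomised controlled trials and survival analysis in oncology",
    "Reviewer C": "evaluating transformer language models for text summarisation",
}

def rank_reviewers(abstract: str, profiles: dict[str, str]) -> list[tuple[str, float]]:
    """Rank reviewers by cosine similarity between the abstract and each profile."""
    names = list(profiles)
    vectoriser = TfidfVectorizer(stop_words="english")
    # Row 0 is the manuscript; rows 1..n are the reviewer profiles.
    matrix = vectoriser.fit_transform([abstract] + [profiles[n] for n in names])
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    return sorted(zip(names, scores), key=lambda pair: pair[1], reverse=True)

abstract = "We evaluate large language models for summarisation of clinical text."
for name, score in rank_reviewers(abstract, reviewers):
    print(f"{name}: {score:.3f}")
```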
Where the Risks Are Real
The use of large language models (LLMs) to generate sections of manuscripts, or entire manuscripts, poses serious integrity risks. LLMs confabulate: they produce plausible-sounding text that may contain fabricated citations, false claims, or misrepresented methodology. When authors use these tools without rigorous verification, the result can be research that is wrong in ways that are difficult to detect.
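One practical safeguard against fabricated references is mechanical spot-checking: every DOI in a reference list can be resolved against a bibliographic registry before the text is trusted. The sketch below uses the public Crossref REST API to do this; the doi_resolves helper and the second, deliberately fake DOI are illustrative, and a real check should also compare titles and author lists, since a confabulated reference can borrow a genuine DOI.

```python
# Spot-check DOIs against the public Crossref API (https://api.crossref.org).
# Illustrative only: a thorough check would also match titles and authors,
# respect rate limits, and handle citations that lack a DOI.
import requests

def doi_resolves(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return response.status_code == 200

dois_to_check = [
    "10.1038/nature14539",      # real: LeCun, Bengio & Hinton, "Deep learning" (2015)
    "10.1234/fabricated.2024",  # the kind of plausible-looking DOI an LLM might invent
]
for doi in dois_to_check:
    verdict = "found" if doi_resolves(doi) else "NOT FOUND, verify manually"
    print(f"{doi}: {verdict}")
```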
AI use in peer review is a different but equally serious concern. Uploading a confidential manuscript to a public LLM service breaches the confidentiality of the review process and may expose unpublished findings to unauthorised parties. Many such services retain prompts and may use them to train future models, so a manuscript shared this way can no longer be treated as confidential.
What the Community Needs
The community needs clear, enforceable author disclosure requirements for AI use in drafting and analysis; journal policies on AI in peer review that are unambiguous and consistently applied; infrastructure for verifiable author contributions, including ways to distinguish AI-assisted from AI-generated content; and an ongoing conversation across disciplines about where AI assistance crosses a line that undermines the integrity of the scholarly record.
Xpertia supports the development of these norms and will be publishing updated author and reviewer guidelines on AI use in the coming months. In the meantime, the guiding principle is simple: AI may assist, but the author bears full responsibility for every claim in the published work.
