More A than I: Testing for Large Language Model Plagiarism in Political Science
By Robert Keener, The University of Tennessee
This article shows how the introduction of large language models (LLMs) has enabled a sudden, significant increase in the ability of political science professionals to plagiarize their articles by prompting LLMs to write for them. Evidence for this claim is presented through a brief overview of the limitations of LLMs and by searching peer-reviewed articles for words that are disproportionately used by the most popular LLM, ChatGPT. The search reveals a rapid spike in the use of words that are unremarkable except for their popularity in ChatGPT’s output, as identified by an AI professional, indicating that this method can signal the likelihood of plagiarism in a given article. The article concludes with the limitations of this keyword-detection method and recommendations for limiting LLM plagiarism in the field of political science as a whole.
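The keyword-detection approach the abstract describes can be sketched as a simple frequency check. This is a minimal illustration only: the marker-word list below is hypothetical, not the word list the article actually uses, and the article's real method is not reproduced here.

```python
# Hypothetical sketch of keyword-based LLM detection: measure what share
# of an article's tokens come from a list of suspected LLM marker words.
from collections import Counter
import re

# Hypothetical marker words; the article's actual list is not given here.
LLM_MARKER_WORDS = {"delve", "tapestry", "multifaceted", "underscore"}

def marker_rate(text: str) -> float:
    """Return the fraction of tokens that are suspected LLM marker words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    hits = sum(counts[w] for w in LLM_MARKER_WORDS)
    return hits / len(tokens)

sample = "We delve into a multifaceted tapestry of findings."
print(round(marker_rate(sample), 3))  # 3 marker words out of 8 tokens
```

A higher rate than a pre-LLM baseline would, on this sketch's logic, flag an article for closer inspection; it cannot by itself prove plagiarism.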
- PS: Political Science & Politics, Volume 58, Issue 4, October 2025.