Designing Politics and IR Assessments in the Era of AI: An Empirical Investigation into ChatGPT’s Output Across Bloom’s Revised Taxonomy

By Matthias Dilling, Trinity College Dublin and Leah Owen, Swansea University

ChatGPT (and generative AI more broadly) is often presented as doing for writing-based pedagogy what pocket calculators did for maths: automating lower-level tasks to free up time for higher-level learning. Is this true for politics essays? Based on a mixed-methods analysis of a large corpus, we find that it is not. Weaknesses in engaging with empirical evidence, ongoing issues of hallucination and misattribution, and often trivial evaluative statements render it a poor foundation for teaching and learning in politics. Instead, it can serve as a signpost to other content or as a basis for academic integrity exercises.



The Journal of Political Science Education is an intellectually rigorous, path-breaking, agenda-setting journal that publishes the highest quality scholarship on teaching and pedagogical issues in political science. The journal aims to represent the full range of questions, issues and approaches regarding political science education, including teaching-related issues, methods and techniques, learning/teaching activities and devices, educational assessment in political science, graduate education, and curriculum development.