Some of the early conversation about generative AI in education centered on its affordances for cheating. There were scores of social media posts. Institutions held discussions about how much, if anything, to invest in AI detection software. And several thinkpieces opined about the risks a magic answer machine posed to student learning. These were questions of academic integrity.

Image: A meme showing a confused anime character (Katori Yutaro, the hero of the 1991 anime Taiyou no Yuusha Fighbird [Brave of the Sun Fighbird]) and a butterfly. Overlays label the character "Professor" and the butterfly "AI assignment" and ask, "Is this learning?"

Academic integrity matters – so much so that my own institution requires students to sign the honor code in a first-year ceremony. We’ve seen college administrators shamed into resigning after deep dives into their decades-old dissertations with an eye for sloppy citation. There is even a foundation-funded “International Center for Academic Integrity.”

That center publishes a document called “The Fundamental Values of Academic Integrity,” which lists those values as honesty, trust, fairness, respect, responsibility, and courage. “Without them,” the report states, “the work of teachers, learners, and researchers loses value and credibility” (p. 4).

When we worry about AI replacing student effort, trust seems to matter most. I know there are plenty of ethical issues that accompany AI into higher education, but for this post I’d like to look at some pragmatic ones related to trust.

Why do we acknowledge work we did not do or ideas we did not originate when we incorporate them into our own? The right to benefit from one’s labor plays a role here, but individual factors aside, showing sources benefits the audience of our work.

If I bring in an idea from someone else’s study and acknowledge the source, you, as the audience, can go back to that original study and evaluate its quality for yourself. If I fail to do so, I deny your right to fully consider the quality of the ideas. We teach students that this kind of consideration, comparison, and evaluation is part of the logic of peer review. If the writing or ideas from AI aren’t identified, the audience loses that right of evaluation.

Moreover, generative AI can create “hallucinated” reference lists, where there is nothing to evaluate at all (IBM n.d.). This logic of the right to review is one reason many academic journals have stipulations on the potential use of AI in the creation of works they might publish, according to a summary in Editage (Manna 2023). For students participating in undergraduate research, this might be particularly salient.

But this principle also applies to work done for course assignments. How do we know that a student has developed the facility to think critically in a field if the only product we see is work a machine can write? Although the existing academic integrity penalty system might help, it sets up a confrontational model in which it’s hard for the teacher to win (Liu et al. 2024). The solution could be to incentivize students to do their own work through graded, scaffolded assignments. It might also be time to rethink assessment itself, depending on the goals of the field.

It is worth noting, though, that for work done outside of the academy in fields like journalism and marketing, AI-enhanced writing is rapidly being embraced as a productivity measure. Academics who teach will walk a fine line between academic standards that require integrity and students who expect to be prepared for their first jobs.

References

Manna, Debraj. 2023. “Evolving Journal Guidelines for the Use of AI.” Editage. August 31, 2023. https://www.editage.com/insights/evolving-journal-guidelines-for-use-of-ai.

Liu, Jae Q. J., Kelvin T. K. Hui, Fadi Al Zoubi, Zing Z. X. Zhou, Dino Samartzis, Curtis C. H. Yu, Jeremy R. Chang, and Arnold Y. L. Wong. 2024. “The Great Detectives: Humans versus AI Detectors in Catching Large Language Model-Generated Medical Writing.” International Journal for Educational Integrity 20 (1). https://doi.org/10.1007/s40979-024-00155-6.

“What Are AI Hallucinations.” n.d. IBM. https://www.ibm.com/topics/ai-hallucinations.

About the Author

Amanda Sturgill, Associate Professor of Journalism at Elon University, is the 2024-2026 CEL Scholar, focusing on the intersection of artificial intelligence (AI) and engaged learning in higher education. Connect with her at asturgil@elon.edu.

How to Cite this Post

Sturgill, Amanda. 2024. “AI and Academic Integrity.” Center for Engaged Learning (blog), Elon University. July 5, 2024. https://www.centerforengagedlearning.org/ai-and-academic-integrity/.