AI Hallucinations Matter for More Than Academic Integrity
by Amanda Sturgill
October 14, 2025

Image generated using AI with the following prompt: Create a black-and-white line art image that views over the shoulder a person looking at a large map she is holding. On the map should be several Xs marking points of interest and one cartoon robot. There should be a text window next to the robot filled with 1s and 0s to represent binary code.

I had to chuckle over this quote in a May New York Times article: “Though they are useful in some situations—like writing term papers, summarizing office documents and generating computer code—their mistakes can cause problems.” I think many of my fellow academics would see even mistake-free use of generative AI as causing problems with most of the activities named. But an article I found from the BBC raised a whole new kind of alarm. The article discusses travelers who use generative AI to plan trips, only to find that the landmarks they set out to see do not exist.

Consider the practical problems that AI hallucinations could create for experiential learning. The BBC article cites a Peruvian tour guide who encountered tourists planning a trek to a fictitious location suggested by AI. The guide noted that this kind of travel in Peru, in particular, could be life-threatening if tourists found themselves lost at high altitude with no phone signal. And with highly realistic writing, photos, and now video through tools like the recently released Sora from ChatGPT creator OpenAI, hallucinations can create very convincing impressions. When our students study abroad in Europe, they are often eager to visit many destinations on the weekends, and it's easy to see AI-assisted trip planning leading to hazardous results.

From Fake Landmarks to Real Dangers

It's an issue for more than studying away. I remember in high school chemistry when my teacher carefully added a small piece of sodium to a beaker of water, and it seemed to catch fire and dance across the surface. The lesson was that we'd generally consider water a safe chemical in the lab, but that safety is relative. That kind of context matters, and asking generative AI to explain research design can show how easily it is lost. For example, I was able to get Google's Gemini to tell me how to set up a demonstration like a viral video I saw in which a physics professor uses a pendulum to swing a heavy weight at his head. The physics works, but you do have to know some context, like not giving the weight a push.
Microsoft's Copilot gave me a breakdown of how to run the Milgram experiment. Some responses include cautions, others don't, and it would be hard for a novice to know the difference.

In the same way, using AI as a resource for community-based learning could cause problems. Whether it's asking AI to identify locations in a community or to explain a behavior, the AI can confidently get things wrong. I asked Copilot to identify places of interest near my own university; it identified a museum but got the opening days wrong. I asked it why people moved out of a region of a nearby town where previous students had done oral history, and it completely missed a large industrial accident that was a major push factor. Interestingly, when I asked why people moved in, it was correct, but it mostly cited my students' work. Relying on generative AI for information could have harmful physical and social ramifications for community-based learning as well.

Why Context Still Matters in the Age of AI

The takeaway from this is communication. Students can bring their curiosity and interest in emerging technologies into the classroom, but faculty need to know that it is happening. That way, they can learn together about what these tools can and can't do.

References

Brown, Lynn. 2025. “The Perils of Letting AI Plan Your Next Trip.” BBC, September 29, 2025. https://www.bbc.com/travel/article/20250926-the-perils-of-letting-ai-plan-your-next-trip.

Huang, Kalley. 2023. “Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach.” The New York Times, January 16, 2023. https://www.nytimes.com/2023/01/16/technology/chatgpt-artificial-intelligence-universities.html.

Metz, Cade. 2021. “A.I. Can Now Write Its Own Computer Code. That’s Good News for Humans.” The New York Times, September 9, 2021. https://www.nytimes.com/2021/09/09/technology/codex-artificial-intelligence-coding.html.

Metz, Cade, and Karen Weise. 2025. “A.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse.” The New York Times, May 5, 2025. https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html?unlocked_article_code=1.rU8.szpR.zxUakRtHm0zh&smid=url-share.

OpenAI. 2025. “Sora 2 Is Here.” OpenAI, September 30, 2025. https://openai.com/index/sora-2/.

Wikipedia. 2025. “Milgram Experiment.” Last modified September 26, 2025. https://en.wikipedia.org/wiki/Milgram_experiment.

Lectures by Walter Lewin. 2025. “When a Physics Teacher Knows His Stuff !!” YouTube video, 0:53. October 2025. https://youtu.be/77ZF50ve6rs?si=TZwIEuLdcgVXX9OG.

About the Author

Amanda Sturgill, associate professor of journalism, is the 2024-2026 CEL Scholar. Her work focuses on the intersection of artificial intelligence (AI) and engaged learning in higher education. Dr. Sturgill also previously contributed posts on global learning as a seminar leader for the 2015-2017 research seminar on Integrating Global Learning with the University Experience.

How to Cite This Post

Sturgill, Amanda. 2025. “AI Hallucinations Matter for More Than Academic Integrity.” Center for Engaged Learning (blog). October 14, 2025. https://www.centerforengagedlearning.org/ai-hallucinations-matter-for-more-than-academic-integrity.