When it Comes to AI in Education, Timing May Matter
by Amanda Sturgill
August 12, 2025

Figure 1. Image generated by ChatGPT with the prompt: Concept: A passport-style document labeled “Learning Journey.” One page has a large “AI Used” stamp. Another has question marks over the “Reflections” and “Integrations” sections.

I’ve participated in several professional education opportunities this summer related to AI as a tool for higher education, and one of my biggest takeaways is that there is little agreement among faculty. Even in a tiny academic unit of fewer than ten people, you will find every opinion, from AI as a great timesaver that instructors and students alike can benefit from tremendously, to AI as an existential threat to all that academia tries to do. Being a centrist at heart, I tend to believe the answer lies somewhere in the middle, and that raises the question of when AI use does more harm than good.

Starting with AI: A Help or a Hindrance?

MIT’s Media Lab published work this summer (Kosmyna 2025) on the issue of “cognitive debt” as a consequence of using AI tools. The intriguing study involved brain scans of participants who were assigned to groups that used or did not use AI tools as part of a writing task. The researchers found clear differences between the scans across groups: those who used large language model (LLM) assistance had less brain activity than those who did the task without assistance, and LLM users were also less able to quote their own work than were those who wrote without assistance.

What was more interesting to me, though, was the change over time. Some participants were shifted between groups for a fourth session, and those who had written without tools in the earlier sessions did a better job in the final one even though they could now use an LLM. When the situation was reversed (LLM use on the earlier tasks, none on the later one), preliminary data suggested that relying on LLM help early on affected later performance, even after the LLM use stopped.

That’s a powerful finding. The study was limited to writing tasks, but because writing tasks pervade high-impact practices (HIPs), it raises some questions for me.

For Campus-Based HIPs

- For novel writing tasks requiring critical thinking, such as creative problem-solving, strategic planning, or metacognition, what part of the project is suitable for LLM use in order to preserve critical thinking time?
- Is the time when you might incorporate an LLM dependent on your learners’ experience?
  For example, how would students in a first-year seminar be different from students in a capstone course?
- For projects that are done with others, how might LLM use affect learning differently for members of a group?

For Off-Campus HIPs

- What do research or internship supervisors expect with respect to LLM use? How might those expectations affect student learning?
- When students engage with community members, how are LLMs used by those community members?
- How might LLM use affect the student’s ability to integrate experiences across contexts when going off campus (or further abroad)?

The write-up is lengthy, but it is also packed with food for thought for teachers, so I’d encourage you to read it in full.

References

Kosmyna, Nataliya. 2025. “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.” MIT Media Lab, June 10, 2025. https://www.media.mit.edu/publications/your-brain-on-chatgpt/.

About the Author

Amanda Sturgill, associate professor of journalism, is a 2024-2026 CEL Scholar. Her work focuses on the intersection of artificial intelligence (AI) and engaged learning in higher education. Dr. Sturgill also previously contributed posts on global learning as a seminar leader for the 2015-2017 research seminar on Integrating Global Learning with the University Experience.

How to Cite this Post

Sturgill, Amanda. 2025. “When it Comes to AI in Education, Timing May Matter.” Center for Engaged Learning (blog). August 12, 2025. https://www.centerforengagedlearning.org/when-it-comes-to-ai-in-education-timing-may-matter/.