Applying Lessons Learned about Student Use of Generative AI in Academics

by Aaron Trocki, August 1, 2024

In previous blog posts, I shared the results of a focus group interview about students’ perceptions of using generative AI in their academics. The focus group interview took place at the end of fall semester 2023, after my students completed an artificial intelligence-supported assessment (AI-SA) in my Calculus I course. Going into spring semester of this year, I used some of the lessons I learned from that initial venture with generative AI and assessment to document students’ feedback and develop new AI-SAs. While planning and writing these new AI-SAs, I revisited the feedback loop proposed by Clark and Talbert (2023):

[Figure: Clark and Talbert’s feedback loop (Clark and Talbert 2023, 12)]

Clark and Talbert explained that traditional grading lacks these feedback loops, in which faculty give meaningful feedback that students act upon to change their previous thinking and what they offer as evidence of learning. Feedback is essential to authentic learning. In fall semester 2023, I applied this feedback loop when engaging students in assessments for learning, as described by Boud et al. (2018). In spring 2024, the same framework structured my use of student feedback: I did something (fall 2023) by piloting an AI-SA with calculus students, got feedback (fall 2023) in the form of student survey and focus group interview responses, thought about the feedback (January 2024) between semesters, and made changes (spring 2024) to previously used assessments. Specifically, I changed three previously used assessments to incorporate generative AI in ways that reflected my understanding of student feedback. Clark and Talbert’s feedback loop applies to faculty learning and growth as well.

In January 2024, I thought about student feedback and distilled what I learned down to five practices for student use of generative AI in assessments for learning:

1. Use generative AI in alignment with clear and agreed-upon expectations.
2. Use generative AI as a tool for learning, not a replacement for demonstrating learning.
3. Use generative AI as a starting place to efficiently explore concepts and generate examples of how concepts are applied.
4. Use and reference generative AI output in AI-inactive portions of assessments.
5. Use generative AI output critically, and always assess its accuracy.

In this blog post, I share how I applied some of what I learned to write new AI-SAs, which I implemented this past spring in a Calculus I course.
During that pilot implementation, I once again gathered feedback in the form of survey responses, which students gave after they finished each AI-SA and at the end of the semester. The first new AI-SA is described below, along with selected summarized student feedback.

AI-SA: Use ChatGPT to Help You Write a Letter

This was the first AI-SA students completed. Early in the semester, they wrote a letter home about what they had been learning in Calculus I and their plans for success. In the AI-active portion, students explored foundational concepts such as velocity and limits and were to solidify their understanding of these concepts before moving to the AI-inactive portion of writing their letter. Here is an excerpt from the guidelines for the AI-inactive portion:

ChatGPT-Inactive

Make sure you’ve copied and pasted the link to your ChatGPT output at the top of your writing submission, right below your name. You may reference the ChatGPT output in your written-by-you letter, but do not ask ChatGPT anything new.

In your letter, you will reflect on what you have learned so far in MTH 1510: Calculus I by writing a letter home. Pick a family member to write to (mom, dad, sibling, etc.) and write them a letter. The key to writing a quality letter is to write in a way that your non-expert reader will understand. Your letter should accomplish the following:

1. Shares the title and main idea/purpose of this course;
2. Uses what you have learned in this course and your discussion with ChatGPT to explain what major concepts you have learned about so far. Be sure to include the following:
   - Average velocity
   - Instantaneous velocity
   - Secant slope
   - Tangent slope
   - Limits
3. Discusses what has surprised you about the course so far;
4. Explains your plan for success in the rest of the semester in this course.

The letter must be over one page, use 12-point font (Times New Roman or Calibri), and be single-spaced.

After students submitted their letters, I read through them and offered feedback. Many of the submissions stood out to me as exemplars for demonstrating evidence of learning. I found that ChatGPT assisted most students with explaining these challenging concepts and applications for their non-expert readers. Part of one letter submission read as follows:

Dear Mom,

I am writing this letter to introduce you to the world of calculus! I myself have just been introduced to this topic, so I’m hoping I can teach it to you in a clear and concise manner. Calculus helps us analyze how things move and change. To explain the next few topics I will be discussing, I will be using a simple and hopefully relatable example, a road trip. I’ll use this example to compare Calculus topics I have learned to events you may experience on a road trip. Now imagine we are on a road trip. Average velocity is like checking your overall speed for the whole trip. […]

In the AI-active work this student submitted, they used ChatGPT to come up with a context (a road trip) to contextualize and explain major foundational concepts. (For readers who want the underlying mathematics, the standard definitions behind these concepts are sketched below, just before the survey results.)

Exploring Student Use of Generative AI in the Assignment

In this AI-SA, students submitted a link to their ChatGPT conversation, which allowed me to compare that conversation to what they ultimately reported in their letter. I suspect that this expectation encouraged students to use generative AI as a tool to help them learn, as opposed to using it as a replacement for evidencing their learning. This positive finding and others were reflected in the survey responses students gave after they submitted their letters for feedback and grading.
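As an aside for non-expert readers, here is a minimal sketch of the standard definitions connecting the five concepts in the letter assignment. This sketch is illustrative only and was not part of the assignment guidelines or the student’s letter; in the road-trip framing, let \(s(t)\) be the distance driven after \(t\) hours.

```latex
% Average velocity over the time interval [a, b]: the change in
% distance divided by the elapsed time. Geometrically, this is the
% secant slope through (a, s(a)) and (b, s(b)) on the graph of s.
\[
v_{\text{avg}} = \frac{s(b) - s(a)}{b - a}
\]

% Instantaneous velocity at time a: the limit of average velocities
% over shrinking intervals. Geometrically, this is the tangent slope
% at (a, s(a)), i.e., the speedometer reading at that instant.
\[
v(a) = \lim_{h \to 0} \frac{s(a+h) - s(a)}{h}
\]
```

In other words, the secant slope is the trip-average speed between two points, the tangent slope is the speedometer reading at a single moment, and the limit is what connects the two.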
Some summarized survey responses are included here. These results give some evidence for generative AI supporting student learning of disciplinary content, with an average of 3.66. The follow-up survey question asked students to briefly explain their answer choice. Two student responses are provided below that support the high average regarding ChatGPT helping them understand math:

“It kind of just brought everything together in a way I hadn’t thought about before and felt like a very full circle moment because it brought up a lot of stuff we have been talking about and it actually started to make sense reading it in a different context like that.”

“ChatGPT helped provide me with definitions of the terms we used in the letter in a simpler way that was easier for me to understand. It also provided examples of the terms which helped me grasp the ideas better because I could see them firsthand.”

The next survey question asked students about ChatGPT’s role in helping them with their writing. The summarized survey results were once again promising, with an average of 3.62. Twenty-seven of these twenty-nine students also chose to respond to the follow-up survey question that asked them to explain their answer choice. The responses fell under one of four themes, which are summarized below with supporting quotes and the frequency of each theme’s occurrence.

| Emergent Theme | Supporting Quote from Follow-Up Survey Question |
| --- | --- |
| ChatGPT helped by clarifying concepts and giving examples (n = 15) | “It did help give me simple ways to explain the topics I was given for the writing assignment.” “I always think ChatGPT is an easy way to generate ideas on how to work things.” |
| ChatGPT helped with outlining and structuring their letter (n = 5) | “I followed the outline from ChatGPT to write my letter. It worked as a guide for me.” |
| ChatGPT did not help much with their writing (n = 4) | “I mostly did the writing portion on my own. I took what ChatGPT said and changed the example into something that I thought was better.” |
| ChatGPT helped too much with student writing and replaced some of the evidence for student thinking (n = 3) | “ChatGPT gave long explanations and did a lot of the writing for me aside from personal comments.” “ChatGPT wrote a big chunk of what I put on the paper.” |

These results intrigued me, as some students seemed to rely on ChatGPT in a balanced way when demonstrating their understanding, while others either dismissed or over-relied on it. Because I had access to each student’s conversation with ChatGPT, I was able to give meaningful feedback regarding their technology use. For instance, for students who quoted or paraphrased ChatGPT extensively, I gave feedback that cautioned them against letting ChatGPT do the writing for them. I further indicated that I would assess their level of reliance on ChatGPT when I graded their next AI-SA. In this way, I hoped that students would use my feedback to make changes to how they demonstrated evidence of their learning when ChatGPT was available for their use.

After I provided feedback and grades to my students through our learning management system, I revisited and revised the drafts of the assignment guidelines for the next two AI-SAs I planned to use that semester. I intended for students to continue to use ChatGPT in powerful ways to support their understanding of math concepts, while encouraging caution about how much they relied on ChatGPT when producing written evidence of their learning.
In the next blog post, I will share the second AI-SA students completed, along with samples of student work and summarized feedback. I encourage you to think about ways you can translate what I have learned from piloting ChatGPT in assessments to promote student growth with this technology in the disciplines you teach.

References

Boud, David, Phillip Dawson, Margaret Bearman, Sue Bennett, Gordon Joughin, and Elizabeth Molloy. 2018. “Reframing Assessment Research: Through a Practice Perspective.” Studies in Higher Education 43 (7): 1107–1118.

Clark, David, and Robert Talbert. 2023. Grading for Growth: A Guide to Alternative Grading Practices that Promote Authentic Learning and Student Engagement in Higher Education. New York: Stylus Publishing, LLC.

About the Author

Aaron Trocki is an Associate Professor of Mathematics at Elon University. He is the CEL Scholar for 2023–2024 and is focusing on models of assessment and feedback outside of traditional grading assumptions and approaches.

How to Cite this Post

Trocki, Aaron. 2024. “Applying Lessons Learned about Student Use of Generative AI in Academics.” Center for Engaged Learning (blog), Elon University. August 1, 2024. https://centerforengagedlearning.org/applying-lessons-learned-about-student-use-of-generative-ai-in-academics