Analyzing an Artificial Intelligence-Supported Assessment and Student Feedback

by Aaron Trocki | September 24, 2024

In my prior blog post, I summarized lessons learned from piloting an artificial intelligence-supported assessment (AI-SA) and gathering student feedback. This work was framed by Clark and Talbert's (2023) feedback loop model: I used feedback gathered from students during fall semester 2023 to write three new AI-SAs, which I implemented in spring 2024. In that post, I also shared the first AI-SA I implemented in the spring, Use ChatGPT to Help You Write a Letter, along with some student feedback. Students' writing submissions and feedback provided some evidence that they used ChatGPT in meaningful ways to support their learning.

In this blog post, I share the second of these three assessments along with development details, student work samples, and student feedback. While writing the second AI-SA, I used the AI-SA framework and the five practices for student use of generative AI in assessments for learning. These five practices resulted from my analysis of a student focus group interview from fall 2023 and overlap with many of the guiding questions found in the AI-SA framework:

1. Use generative AI in alignment with clear and agreed-upon expectations.
2. Use generative AI as a tool for learning, not a replacement for demonstrating learning.
3. Use generative AI as a starting place to efficiently explore concepts and generate examples of how concepts are applied.
4. Use and reference generative AI output in AI-inactive portions of assessments.
5. Use generative AI output critically and always assess its accuracy.

I named the second AI-SA that students completed that spring Create Educational Materials for Calculus I Students. The assessment addressed two learning goals stated in the course syllabus:

- Evaluate derivatives using the definition and using the various rules to explain their relationship to the physical world.
- Use derivatives to solve a variety of problems including graph sketching, optimization problems, and related rates problems.

Bold portions of each goal were directly assessed in this AI-SA. Related rates problems are a major application of calculus in this course, and students often struggle with the complexities involved in modeling related rates problems with appropriate equations and derivatives. My intent was for students to use ChatGPT to help them see the relevance of related rates to their areas of interest and to see how calculus is a powerful tool for solving these problems.
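For readers less familiar with the topic, a minimal, textbook-style example of my own (not one of the problems from the assignment) shows what modeling a related rates problem with an appropriate equation and derivative involves: relate the quantities, differentiate with respect to time, and solve for the unknown rate.

```latex
% Illustrative related rates problem (textbook-style; not from the assignment):
% air fills a spherical balloon at 10 cm^3/s -- how fast is the radius growing
% when r = 5 cm? Relate V and r, then differentiate with respect to time t.
\[
V = \frac{4}{3}\pi r^{3}
\quad\Longrightarrow\quad
\frac{dV}{dt} = 4\pi r^{2}\,\frac{dr}{dt}
\quad\Longrightarrow\quad
\frac{dr}{dt} = \frac{dV/dt}{4\pi r^{2}} = \frac{10}{4\pi(5)^{2}} = \frac{1}{10\pi}\ \text{cm/s}.
\]
% The "related rates" step is the middle one: differentiating the relating
% equation links the known rate dV/dt to the unknown rate dr/dt.
```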
Assessment guidelines for the AI-SA, Create Educational Materials for Calculus I Students, are included below, along with examples of where particular guideline segments aligned with one of the five practices for student use of generative AI in assessments for learning. This AI-SA was completed after we covered the topic of related rates during class.

[Figure: Annotations on the Create Educational Materials for Calculus I Students assignment instructions, denoting which aspects of the assignment operationalize the five practices named in a previous blog post.]

When grading and giving students feedback on the educational materials they produced, I recorded some trends across submissions. In general, students found accurate information about the topic of related rates and applied what they found to solve a new related rates problem. Students also used ChatGPT to generate examples of related rates in their areas of interest. Across the twenty-seven submissions, eighteen different areas of interest were represented, with business and sports being the most common.

Notice in these AI-SA guidelines that students use ChatGPT to learn about related rates and the calculus methods and techniques used in related rates problems. However, students were not to ask ChatGPT to solve the related rates problem they chose to do at the end of the AI-inactive portion. I suspect that some students simply copied the related rates math problem, pasted it into a different ChatGPT conversation, and had ChatGPT solve it for them. I was aware of the potential for this to occur but was not overly concerned because ChatGPT (at the time of this writing) does not typically solve complex word problems in mathematics correctly. For instance, in the focus group interview I conducted last semester, students reported that ChatGPT gave them multiple different answers to the same math problem. Out of curiosity, I asked ChatGPT to solve each of the two related rates problems in this AI-SA, and ChatGPT provided an incorrect answer for each. The potential for ChatGPT to get math problems wrong forces conscientious students to use generative AI output critically and always assess its accuracy.

All but two students solved their related rates question correctly, and I was impressed with the level of detail many students gave in their explanations of how to solve them. One student opened her submission as follows:

"Hey [classmate who missed class], I understand you're looking for help with related rate problems. Don't worry, I got you. In this paper, I'll walk you through the concept of related rates, their significance, and the calculus methods we use to solve them. I'll also provide a detailed example of a related rates problem and a step-by-step process to guide you through the solution."

After defining related rates and explaining how calculus is used to solve these problems, this student walked her reader through a detailed problem-solving process that included an image of a diagram and work produced by hand. She then explained her problem-solving process in two paragraphs and ended with:

"Since both planes A and B are approaching the airport, the distance between them (d) decreases as they approach. Therefore, the rate of change for (d) is expected to be negative. My answer matches this prediction, so there is a good chance my answer is correct."
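Her sign check reflects the underlying calculus. Below is a minimal sketch of a two-plane related rates setup of this kind, using a hypothetical configuration with perpendicular approach paths rather than the actual positions and speeds from her problem, which the post does not reproduce.

```latex
% Illustrative two-plane setup (hypothetical configuration; the student's actual
% numbers are not given in the post). Write s for the distance between the planes
% (the student's "d") to avoid clashing with derivative notation. Plane A is
% x miles from the airport along one straight approach path and plane B is y miles
% away along a perpendicular one; both are inbound, so dx/dt < 0 and dy/dt < 0.
\[
s^{2} = x^{2} + y^{2}
\;\Longrightarrow\;
2s\,\frac{ds}{dt} = 2x\,\frac{dx}{dt} + 2y\,\frac{dy}{dt}
\;\Longrightarrow\;
\frac{ds}{dt} = \frac{x\,\dfrac{dx}{dt} + y\,\dfrac{dy}{dt}}{s} < 0,
\]
% since x, y, s > 0 while both rates are negative: the distance between the planes
% is shrinking, which is the sign the student predicted before checking her answer.
```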
In the feedback I gave this student, I complimented her on using ChatGPT to develop a problem-solving process and then applying it critically by checking the reasonableness of her answer.

The survey responses students gave on this AI-SA spoke to what they learned through the assessment, including how ChatGPT did or did not support their learning. Some summarized survey responses are included below.

Students rated how much ChatGPT helped them "understand the mathematics involved in this writing project" on a scale of 1 to 5, with 1 being the lowest and 5 being the highest. The average rating of 3.67 indicates that students found ChatGPT to be a helpful learning tool for understanding the mathematics of related rates. However, when I read student responses to the follow-up survey prompt asking them to explain their rating, I found that many students perceived ChatGPT as limited in the help it could directly give with the mathematics. The following response is indicative of many received: "ChatGPT helped me somewhat understand the mathematics that were involved. It helped specifically when I asked it to explain in simpler terms." Most students mentioned ChatGPT helping with definitions, examples, and general problem-solving steps, but many also noted that what they learned from class or the textbook helped them as well.

Students also responded to a survey item about ChatGPT and their writing. The average response on this item was 3.82, indicating that students found ChatGPT helpful to a high degree in completing the writing portion of this project. Analysis of student responses to the follow-up prompt asking them to explain their rating revealed three common themes:

- ChatGPT gave definitions to use immediately in writing.
- ChatGPT gave examples to support points made in writing.
- ChatGPT brought together what was talked about in class.

While ChatGPT seemed to help most students give written evidence of their understanding of related rates, a couple of students reported that they did not use ChatGPT at all for their writing.

After revisiting the grades and feedback I gave students on this AI-SA, along with the feedback they gave me in the post-submission survey, it seemed that students used ChatGPT in a more balanced way than on the first AI-SA of the spring semester. Their submissions and feedback on this second AI-SA gave some evidence that they were becoming more comfortable with AI-active and AI-inactive portions of assessments, and with using ChatGPT while demonstrating their learning.

In the next blog post, I will share the third and final AI-SA, Write and Produce a Public Service Announcement, that students completed, along with samples of student work and summarized feedback. I encourage you to continue to consider ways you can translate what I have shared to promote responsible and productive use of generative AI in the disciplines you teach.

References

Clark, David, and Robert Talbert. 2023. Grading for Growth: A Guide to Alternative Grading Practices that Promote Authentic Learning and Student Engagement in Higher Education. New York: Stylus Publishing, LLC.

About the Author

Aaron Trocki is an Associate Professor of Mathematics at Elon University. He is the CEL Scholar for 2023–2025 and is focusing on models of assessment and feedback outside of traditional grading assumptions and approaches.

How to Cite this Post
Trocki, Aaron. 2024. "Analyzing an Artificial Intelligence-Supported Assessment and Student Feedback." Center for Engaged Learning (blog), Elon University. September 24, 2024. https://www.centerforengagedlearning.org/analyzing-an-artificial-intelligence-supported-assessment-and-student-feedback/.