Analyzing an Artificial Intelligence-Supported Assessment and Summarizing Key Takeaways
by Aaron Trocki
October 25, 2024

In this blog post, I share my analysis of the third artificial intelligence-supported assessment (AI-SA) students completed in the spring of 2024. If you have been following these posts, you will recall that this AI-SA was the last one administered in the spring 2024 semester and that I developed these AI-SAs from lessons I learned while piloting ChatGPT in fall 2023. My work was structured around Clark and Talbert's (2023) feedback loop, in which student feedback was used to develop my understanding of and approach to engaging students in assessments for learning, including student use of generative AI (Boud et al., 2018).

Recap

The first AI-SA students completed last semester, Use ChatGPT to Help You Write a Letter, guided students through using ChatGPT to demonstrate their understanding of two major concepts along with their plans for success in the course. In the second AI-SA, Create Educational Materials for Calculus I Students, students used ChatGPT to support their understanding of major concepts, explain how these concepts apply to areas of personal interest (e.g., marketing, sports), and produce a detailed explanation of how to solve an application problem. This second AI-SA built on students' experiences from the first and invited them to include visuals that would enhance the educational materials they submitted. Students responded to a survey after each AI-SA submission. Survey response summaries were promising and provided some evidence of ChatGPT's usefulness in promoting student understanding of mathematics and in supporting student writing.

Final assignment

In the third and final AI-SA, Write and Produce a Public Service Announcement, I intended for students to continue building on their experiences using ChatGPT in these assessments. The previous two AI-SAs were more static in nature, with products submitted in written form, possibly with visuals added. The third AI-SA required students to submit a script of the public service announcement and to produce a video based on that script. I hoped that the video component would promote student engagement and creativity. While writing the third AI-SA, I also used the AI-SA framework and the five practices for student use of generative AI in assessments for learning. These five practices, shared below, overlap with many of the question items found in the AI-SA framework:

1. Use generative AI in alignment with clear and agreed-upon expectations.
2. Use generative AI as a tool for learning, not a replacement for demonstrating learning.
3. Use generative AI as a starting place to efficiently explore concepts and generate examples of how concepts are applied.
4. Use and reference generative AI output in AI-inactive portions of assessments.
5. Use generative AI output critically and always assess its accuracy.

In the first two AI-SAs, I explicitly asked students in the assessment guidelines to adhere to these practices; in this third AI-SA, however, I chose not to write these practices into the guidelines. I wanted to test whether, and to what degree, these practices had become normalized expectations for students using generative AI to support their completion of assessments for learning. Further, I felt that student work in this third AI-SA would be more reflective in nature, and I invited students to use ChatGPT as an aid in creatively documenting their learning gains. The third AI-SA guidelines are provided below.

Write and Produce a Public Service Announcement

In this assignment you will reflect on what you have learned so far and how to best approach learning in MTH 1510: Calculus I. You will work with a partner to write and produce a public service announcement video on how to understand one major concept and one significant application from our course. A public service announcement (PSA) is a message in the public interest disseminated by the media without charge to raise public awareness. In the PSA you create, assume your audience (the public) is first-year students at Elon who are considering taking MTH 1510: Calculus I. You can learn about how to compose a PSA.

ChatGPT ACTIVE: You are allowed to use ChatGPT as much as you like.

Your public service announcement should accomplish the following:
- Be video recorded (like a television PSA). You can record yourselves talking, include any images you like, and/or record yourselves talking while images are displayed. Be creative.
- Address an audience of first-year students at Elon who are considering taking MTH 1510: Calculus I. (10 points)
- Explain what Calculus I is about. (See the syllabus, textbook, and notes for some help here.) (10 points)
- Explain what is challenging about the course. (10 points)
- Explain what needs to be done to find success in the course. (10 points)
- Explain the major concept of derivatives. (20 points)
- Explain the significant application of optimization. (20 points)
- The video/audio should flow well and be polished like a real PSA. (20 points)

Have fun with this, be creative, and think about your audience. You must write a script of your PSA. The recording should be 30–90 seconds long, but it is alright to go over the time limit. You will upload your PSA script and video to Moodle.

I enjoyed giving feedback on and grading students' PSA submissions and was impressed by the learning gains students evidenced that semester. It reminded me of what Rachel Forsyth, author of Confident Assessment in Higher Education, shared in her interview with me last year: faculty should aim to enjoy grading because they are proud of what their students have done. Here is one of the many examples that impressed me; it comes from two students who, earlier that semester, expressed much concern about being ready to meet the challenges of Calculus I. Their PSA video provides some evidence of what they learned and accomplished.
In the feedback I gave these students, I complimented them on using ChatGPT to develop a script that explained the learning goals associated with Calculus I, the major concepts, and how to find success in the course. I also gave feedback on the quality of some of the visuals used and on the creativity evident in their PSA submission.

Student Survey Results

All students in this class responded to a survey about their experience completing this AI-SA and how ChatGPT did or did not support their learning. Some summarized survey responses are included below. Students rated how much ChatGPT helped them "understand the mathematics involved in this writing project" on a scale of 1 to 5, with 1 being the lowest and 5 the highest. The average rating of 3.21 indicates that, on the whole, students found ChatGPT moderately helpful for understanding mathematics. Student responses to the follow-up survey prompt, where they explained why they gave their rating, shed some light on these quantitative results. Students who indicated 1 (very low) typically explained that they chose to rely on their notes and/or textbook, whereas students who indicated 3 or above explained how ChatGPT assisted their understanding of major concepts. One student shared, "It gives a very organized description, so it is very helpful. I need to see examples and be walked through it to sort of fully get a grasp on it."

Students also responded to a survey prompt about ChatGPT and their writing of the PSA script. The average response on this survey item was 3.07, indicating that students generally found ChatGPT helpful in supporting their writing. However, the results also show a comparatively high frequency of 1 ("very low") responses. My analysis of students' responses to the follow-up survey prompt, where they explained why they gave their rating, did not reveal much detail about this high frequency of 1 ("very low") rankings. Students who gave that response simply stated that they did not use ChatGPT in their writing.

Analysis

I speculate that the creative opportunity inherent in this AI-SA may have led some students to drastically limit or exclude ChatGPT in their script writing. Students who gave higher rankings to this survey prompt explained how ChatGPT helped. These explanations aligned with those given in previous AI-SAs, such as "ChatGPT gave definitions to immediately use in writing" and "ChatGPT gave examples to support points made in writing." To further analyze students' perceptions of this AI-SA, I reconsidered the five practices for student use of generative AI in assessments for learning in light of the students' PSA submissions, my feedback, and their survey responses. The summary below captures my impressions of how well each practice was or was not evident based on this analysis.

Practice 1: Use generative AI in alignment with clear and agreed-upon expectations.
My impression: The majority of students seemed to use ChatGPT in line with expectations we established prior to this AI-SA. ChatGPT was used to increase understanding of concepts, provide definitions, and clarify how concepts and procedures are applied.

Practice 2: Use generative AI as a tool for learning, not a replacement for demonstrating learning.
My impression: I found it difficult to gauge the degree to which students exhibited practice 2. This was likely due to the reflective nature of the AI-SA and the potential for students to quote or paraphrase the textbook or ChatGPT when responding to prompts that required explanation.
A few students reported that the assignment helped them pull major conceptual threads together towards the end of the semester.

Practice 3: Use generative AI as a starting place to efficiently explore concepts and generate examples of how concepts are applied.
My impression: Based on both survey responses and submitted PSAs, it seems that students exhibited practice 3 to a high degree. Many students reported using ChatGPT to generate examples of how concepts are applied in various contexts. These contexts often aligned with a topic or academic major students chose.

Practice 4: Use and reference generative AI output in AI-inactive portions of assessments.
My impression: Of the students who used ChatGPT, the vast majority seemed to use AI output in the spirit of this practice. Although the AI-SA guidelines did not contain explicit AI-active and AI-inactive components, only one student reported that ChatGPT did much of the writing for them. Students reported using AI to produce an outline or a script that they revised for their own purposes. However, most students did not formally reference generative AI output with in-text citations. This is likely due to the assessment guidelines I provided and the nature of the requested product.

Practice 5: Use generative AI output critically and always assess its accuracy.
My impression: Close to half of the students who reported using ChatGPT also reported cross-checking AI output with their textbook and notes. The other students who reported using ChatGPT did not indicate any cross-checking, but most of them explained that AI output clarified and confirmed what they already knew.

Recall that I had hoped these practices would become further normalized as expectations for student use of generative AI by the time students completed this AI-SA. That hope led me to remove explicit requests for these practices from the student guidelines for this AI-SA, Write and Produce a Public Service Announcement. In general, students used ChatGPT in ways that aligned with the practices I hoped to see. However, I did not require students to submit links to their ChatGPT conversations, so my conclusions are based only on survey findings and analysis of student PSA submissions.

Analysis of student writing and feedback in this and the previous two blog posts gives evidence of the educative potential of generative AI in higher education in general, and in assessments for learning in particular. Increasing this educative potential is challenging for university faculty, as student use of generative AI raises many concerns about what it means to demonstrate learning. I have found a few practices that significantly assisted me in beginning to address this challenge:

- Faculty should deeply consider their instructional purposes before deciding if and how students should use generative AI.
- Disciplinary content knowledge provides a key lens for considering what is appropriate to do with generative AI.
- Expectations for student use of generative AI should be explicitly shared, discussed, and agreed upon.
- Building trusting relationships allows for partnerships with students that can help higher education establish best practices for incorporating generative AI.

In subsequent blog posts, I intend to share student work and feedback on assessments for learning in which students have a choice to use either generative AI or internet searches to support their documentation of learning. I will couple this effort with a look at some recent literature on assessment practices in higher education.
References

Boud, David, Phillip Dawson, Margaret Bearman, Sue Bennett, Gordon Joughin, and Elizabeth Molloy. 2018. "Reframing Assessment Research: Through a Practice Perspective." Studies in Higher Education 43 (7): 1107–1118.

Clark, David, and Robert Talbert. 2023. Grading for Growth: A Guide to Alternative Grading Practices That Promote Authentic Learning and Student Engagement in Higher Education. New York: Stylus Publishing.

Forsyth, Rachel. 2023. Confident Assessment in Higher Education. Los Angeles: SAGE Publications.

About the Author

Aaron Trocki is an Associate Professor of Mathematics at Elon University. He is the CEL Scholar for 2023–2025 and is focusing on models of assessment and feedback outside of traditional grading assumptions and approaches.

How to Cite this Post

Trocki, Aaron. 2024. "Analyzing an Artificial Intelligence-Supported Assessment and Student Feedback." Center for Engaged Learning (blog), Elon University. September 24, 2024. https://www.centerforengagedlearning.org/analyzing-an-artificial-intelligence-supported-assessment-and-student-feedback/.