Exploring Student Choice in Artificial Intelligence-Supported Assessments for Learning (Part 1)
by Aaron Trocki
November 29, 2024

In this series of blog posts, I began by exploring models of assessment and feedback and offering some perspectives on the benefits and drawbacks of various models. The practice perspective described by Boud et al. (2018) proved valuable in my exploration. This perspective accounts for the "before, during, and after" of assessment and its effects on students and faculty. I used this perspective to reconsider the models and practices I had been using. An emergent theme in this work was questioning what faculty should consider "evidence" when students demonstrate learning. Addressing this theme led me to consider how generative artificial intelligence (AI) may or may not play a productive role when students attempt to give evidence of their learning in higher education.

A New Assessment

I recently read O'Neill and Padden's (2021) findings from their survey of faculty about diversifying assessment practices. They found that faculty perceived student choice as empowering students and accommodating the learning of diverse students. With these findings in mind, and with my own curiosity about the prevalence of student use of generative AI, I decided to develop a new assessment for learning. This new assessment was a revision of an artificial intelligence-supported assessment (AI-SA) called Use ChatGPT to Help You Write a Letter Home, which I used last spring and shared in a previous blog post. I revised that AI-SA to give students a choice of which technology (generative AI or an internet search) they would like to use to support their completion of this assessment.

To set up student outcomes on the revised AI-SA, Write a Letter Home, I explained to students that they must first choose between ChatGPT and an internet search to serve as their technology support. Of the 29 students in the class, 23 chose to use ChatGPT and 6 chose to use an internet search. After students' technology commitments were recorded, I handed out the written assessment guidelines. The assessment guidelines were the same for each technology-support choice except for which technology the student was to use. The essential components of the guidelines are provided below, with the technology choice bolded for emphasis.

Write a Letter Home

First choose if you will use the technology **ChatGPT** or if you will use **the internet**. Once you choose, you must stick with your choice. In your writing submission, you will reflect on what you have learned so far in MTH 1510 by writing a letter home.
Pick a family member to write to (mom, dad, sibling, etc.) and write them a letter. The key to writing a quality letter is to write in a way that your non-expert reader will understand. Your letter should accomplish the following:

1. Share the title and main idea/purpose of this course.
2. Use what you have learned in this course and your discussion with **ChatGPT** or your **internet search** to explain the major concepts you have learned about so far. Be sure to include the following:
   - Average velocity
   - Instantaneous velocity
   - Secant slope
   - Tangent slope
   - Limits
3. Discuss what has surprised you about the course so far.
4. Explain your plans for success in the rest of the semester in this course.

The letter must be over one page, use 12-point font (Times New Roman or Calibri), and be single spaced.

Impact of Feedback

After students submitted their letters, I used our learning management system (LMS) to give feedback and grades. During that process, I was also reading the early chapters of Josh Eyler's book, Failing Our Future: How Grades Harm Students, and What We Can Do About It. Eyler asks an important question: "Is the intent of giving a mark married to the effect on the student?" (Eyler 2024, 2). Reading and thinking about his question caused me to consider whether the feedback I have been giving students affects them in the way I intend. For instance, does the feedback I give (which is intended to help students persevere in their understanding of mathematics) actually have that effect?

While reading students' letters and giving feedback, I strove to make the feedback positive for their learning and for their views on how math can be applied to the world. I commented on their plans for success in this course, for example: "I appreciate your plans for success and am happy to offer help in office hours as needed." When I graded similar letters last spring, I simply checked a rubric box that read, "addressed plans for success in this course."

Regarding the mathematics addressed in students' letters, I looked for opportunities to recognize accurate explanations of concepts that reflected students' creativity and/or interests. One letter used the context of kinematics to explain how calculus concepts are applied. Part of my feedback read, "I appreciate the context of kinematics to explain concepts and applications to your reader. Including units and labels to your results will help your non-expert reader grasp these applications of calculus." Although it took substantial effort, I enjoyed giving students feedback that I hoped would have positive effects on their learning that semester.

After revisiting the grades and feedback I gave students on this assessment, I found that most students achieved a high grade. I suspect this may be due to the detailed learning outcomes embedded in the assessment guidelines. Wood (2021, 50) distinguishes between learning goals and learning outcomes, with the latter being specific to students' actions and based on operational verbs. My students are also motivated to maintain a high grade in this course, and this outside-of-class assessment allowed them to ensure the mathematics they chose to include was correct. Overall, students' letters reflected their developing understanding of calculus concepts and how these concepts are applied.
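For readers outside mathematics, a minimal sketch of the relationship the letters were asked to convey may help. The position function s(t) and notation below are mine, not drawn from any student's letter: average velocity over an interval is the slope of a secant line, and instantaneous velocity is the slope of a tangent line, obtained as a limit.

$$\text{average velocity} = \frac{s(t_1) - s(t_0)}{t_1 - t_0} \quad \text{(slope of the secant line)}$$

$$\text{instantaneous velocity} = v(t_0) = \lim_{t \to t_0} \frac{s(t) - s(t_0)}{t - t_0} \quad \text{(slope of the tangent line)}$$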
How Did Students Respond?

In addition to the findings related to the grades and feedback I gave students, I also gathered survey responses from students about their choice between ChatGPT and the internet and how that technology choice affected their ability to provide evidence of learning on this assessment. In Part 2 of this two-part blog post, I will share an analysis of these findings to inform our understanding of students' perceptions of generative AI related to academics.

I encourage you to think about ways to purposefully incorporate generative AI and internet searches into tasks you ask your students to complete. How might you predict students will justify their choices of technology (e.g., ChatGPT or internet) use in academics? We will explore this question, and other questions related to technology use in assessments for learning, in the next blog post.

References

Boud, David, Phillip Dawson, Margaret Bearman, Sue Bennett, Gordon Joughin, and Elizabeth Molloy. 2018. "Reframing Assessment Research: Through a Practice Perspective." Studies in Higher Education 43 (7): 1107–1118. https://doi.org/10.1080/03075079.2016.1202913.

Driscoll, Amy, Swarup Wood, Dan Shapiro, and Nelson Graff. 2021. Advancing Assessment for Student Success: Supporting Learning by Creating Connections Across Assessment, Teaching, Curriculum, and Cocurriculum in Collaboration with Our Colleagues and Our Students. New York: Routledge. https://doi.org/10.4324/9781003442899.

Eyler, Joshua. 2024. Failing Our Future: How Grades Harm Students, and What We Can Do about It. Baltimore: Johns Hopkins University Press.

O'Neill, Geraldine, and Lisa Padden. 2021. "Diversifying Assessment Methods: Barriers, Benefits and Enablers." Innovations in Education and Teaching International 59 (4): 398–409. https://doi.org/10.1080/14703297.2021.1880462.

About the Author

Aaron Trocki is an Associate Professor of Mathematics at Elon University. He is the CEL Scholar for 2023–2025 and is focusing on models of assessment and feedback outside of traditional grading assumptions and approaches.

How to Cite this Post

Trocki, Aaron. 2024. "Exploring Student Choice in Artificial Intelligence-Supported Assessments for Learning (Part 1)." Center for Engaged Learning (blog), Elon University. November 29, 2024. https://www.centerforengagedlearning.org/exploring-student-choice-in-artificial-intelligence-supported-assessments-for-learning-part-1/.