In part one of this two-part blog post, I shared the revision of an artificial intelligence-supported assessment (AI-SA) I had used in spring 2024. The revised version gave students a choice of technology (ChatGPT or internet search) to support their understanding and writing. I implemented the revised assessment in fall 2024: students chose a technology, investigated calculus concepts and applications, and then wrote a letter home to share their understanding with a non-expert reader.

Students’ letters reflected their developing understanding of calculus concepts and how these concepts are applied. I provided feedback that I hoped would have positive effects on students’ learning. Reading Josh Eyler’s book Failing Our Future (2024) drew my attention to the potential effects of the marks and feedback I give and prompted me to consider students’ perspectives when grading this assessment.

Why did students choose to use AI (or not)?  

After submitting their letters home, students responded to a survey about their choice between ChatGPT and an internet search, and about how that technology affected their ability to provide evidence of learning on this assessment. Twenty-nine students completed the assessment: twenty-three chose ChatGPT, and six chose an internet search. Each group responded to a survey about the particular technology it had chosen.

I used qualitative analysis as recommended by Creswell (2013) to analyze responses on both surveys (ChatGPT versus internet search) and identified emergent themes in the reasons students gave for their technology choice. The first survey prompt and summarized findings for each group are included in the tables below.

Survey Prompt (for ChatGPT Group): “In this writing project you chose to use ChatGPT to support your learning and writing. Explain why you chose ChatGPT instead of an internet search for this writing project.”

Emergent Themes:
- Generative AI is easier and more efficient than an internet search (n = 10)
- Already using generative AI and am comfortable with it (n = 4)
- Have not used generative AI much and want to get better at using it (n = 4)
- Generative AI gives better information than an internet search (n = 2)

Survey Prompt (for Internet Search Group): “In this writing project, you chose to use an internet search to support your learning and writing. Explain why you chose an internet search instead of ChatGPT for this writing project.”

Emergent Themes:
- Used to using an internet search or do not know how to use generative AI (n = 4)
- Generative AI is scary (n = 1)

Themes centered mostly on familiarity with the technology and on how students expected their chosen technology to help them complete the assessment.

How useful was AI to students (or not)?  

Other survey results, presented below, summarize students’ ranked choices. Survey prompts were parallel for both groups (ChatGPT versus internet search), and the related findings allow comparisons of each technology’s role in this assessment.

These results show some evidence that generative AI supported students’ understanding of calculus, with an average response of 3.61 on a scale of 1–5 (1: very low, 5: very high). This average is very close to the average (3.66) for the same survey item when students completed this AI-SA the previous semester.

Six students who chose to use an internet search responded to a parallel survey prompt with results summarized in the graph below.  

The average for these six responses was 3.33, and the mode was 4. These results provide some evidence that the internet search helped students’ understanding of calculus. It is worth noting that one student indicated 1: very low and no student indicated 5: very high regarding their internet search. This contrasts with the summary for students who used ChatGPT, where no students indicated 1: very low and four students indicated 5: very high.

More Results 

A follow-up question on both surveys (ChatGPT versus internet search) asked students to briefly explain their answer choice. I analyzed their responses and identified emergent themes in the reasons they gave for their rankings. The highest-frequency themes are summarized in the tables below.

Emergent Themes (for ChatGPT Group), with supporting quotes from the follow-up survey question:

ChatGPT helped or increased students’ understanding (n = 11)
- “It explained concepts in a way I had not been previously exposed to.”
- “It gave helpful examples that I could follow along.”
- “It explained connections between the material.”

ChatGPT reinforced what I already understood (n = 6)
- “It reinforced the information I already knew from this unit.”
- “It explained things in a fancy way of what I already know, but it gave good information.”

Other feedback included students noting how ChatGPT was more efficient than an internet search and how quality prompt engineering is required to get good information from ChatGPT.

Students who chose an internet search also explained their choice with themes summarized in the next table. 

Emergent Themes (for Internet Search Group), with supporting quotes from the follow-up survey question:

Internet reinforced what I already understood (n = 3)
- “I had a good understanding of the topics and this research [using internet] just reaffirmed them for me.”

Internet helped or increased students’ understanding (n = 2)
- “It helped me develop a deeper understanding of the course material.”
- “It helped in applying concepts to real-life.”

The themes students evidenced regarding the degree to which the internet and ChatGPT helped their understanding were similar. However, the internet group’s most frequent theme was reinforced what I already understood rather than helped or increased students’ understanding, whereas the ChatGPT group’s most frequent theme was helped or increased students’ understanding rather than reinforced what I already understood.

How did AI help students write?  

The next survey question asked students about their chosen technology’s role in helping them with their writing.  

The average ranking for this survey item was 3.87, offering evidence that students perceived ChatGPT as helping with their writing. This 3.87 average is higher than the 3.61 average that summarized students’ perception of ChatGPT’s support for their understanding of calculus. Students who chose to use an internet search responded to a parallel survey prompt, with results summarized in the graph below.

The average for these six responses was 3.16, and the mode was 4. This offers some evidence that the internet search helped students with their writing. This average is lower than the 3.33 average that summarized students’ perception of an internet search’s support for their understanding of calculus.

More Results  

I conducted a thematic analysis of the follow-up survey question in which students explained their ranking of how the technology helped their writing. The highest-frequency themes are summarized in the tables below.

Emergent Themes (for ChatGPT Group), with supporting quotes from the follow-up survey question:

ChatGPT was only used for calculus concepts and examples (n = 7)
- “It gave me formulas and basic definitions to review each concept in my paper.”
- “It gave me some deeper details I probably would have left out or forgotten about.”

ChatGPT gave me an outline or structure for my letter (n = 5)
- “It provided me with an outline for my letter and gave me ideas of what to focus on.”
- “It helped me structure my letter.”

ChatGPT did most of the writing for me (n = 5)
- “It helped me complete the writing portion very well because it accurately displayed all the information in the correct format with very few edits needed.”
- “It completed a lot of the writing portion for me.”

Students’ explanations (identified in the first two themes above) often included a statement about how they did not use ChatGPT to write their letter for them. A couple of students also mentioned how ChatGPT helped them write for non-experts.  

Emergent Themes (for Internet Search Group), with supporting quotes from the follow-up survey question:

Internet helped with definitions and descriptions of content (n = 2)
- “It gave descriptions of the topics and I rephrased it in a way I thought made more sense.”
- “It helped with finding the right words to explain the concepts.”

Internet helped by giving examples of applications (n = 2)
- “It gave some relevant examples of how the work we did [in our course] applied to real-world scenarios.”

The themes students evidenced for the degree to which the internet helped their writing relate directly to calculus content. Two students in this group indicated that the internet did not help them at all with their writing.

This data is limited: it comes from one class in which only six of the twenty-nine students chose an internet search over ChatGPT. However, students’ responses and explanations offer a window into their technology choices and how they perceive technology as beneficial. Many students use generative AI prolifically in their college experience, while some are not using it or are just beginning to. In general, these students saw generative AI such as ChatGPT as beneficial for its efficiency and ease of use. A handful recognized the importance of prompt engineering for producing quality information.

Takeaways 

One interesting finding stood out in the average rankings on the two surveys. Students in the ChatGPT group perceived ChatGPT as helping more with their writing than with their understanding of calculus, while students in the internet group perceived an internet search as helping more with their understanding than with their writing. While this finding is not shocking, it made me wonder what faculty could do to encourage uses of generative AI that prioritize developing understanding of disciplinary content over assisting with writing.

As we transition into the new year, I encourage you to consider how students’ use of generative AI impacts how they show evidence of meeting learning goals you have established. This consideration falls within the larger scope of assessment and feedback practices. I will continue to share my investigation of these practices in subsequent blog posts. 


References 

Creswell, John. 2013. Qualitative Inquiry and Research Design: Choosing Among Five Approaches. 3rd ed. Thousand Oaks, CA: SAGE Publications.

Eyler, Josh R. 2024. Failing Our Future: How Grades Harm Students, and What We Can Do About It. Baltimore: Johns Hopkins University Press.  


About the Author   

Aaron Trocki is an Associate Professor of Mathematics at Elon University. He is the CEL Scholar for 2023–2025 and is focusing on models of assessment and feedback outside of traditional grading assumptions and approaches.   

How to Cite this Post   

Trocki, Aaron. 2025. “Exploring Student Choice in Artificial Intelligence-Supported Assessments for Learning (Part 2).” Center for Engaged Learning (Blog), Elon University. January 17, 2025. https://www.centerforengagedlearning.org/exploring-student-choice-in-artificial-intelligence-supported-assessments-for-learning-part-2/.