Assessment in the Upside Down: Academic AI with Students as the Audience

by Amanda Sturgill | April 21, 2026

The image was created with ChatGPT with the following prompt: Create a photorealistic image of a young man wearing a hoodie, sitting at a desk handing a pen to a robot. They are sitting at a desk in a somewhat messy, small college dorm room.

Recently, I downloaded Grammarly’s 2025–26 AI Trends Report. It had an interesting statement in its introduction: “Higher education is no longer at the beginning of its AI journey, but clarity of direction is still emerging” (2). Given other things I have been reading this week about how this year’s freshmen are the last group of students to enter college with experience of academic work before AI was released publicly (Bogost 2025), I think we’ve got quite a contrast going. Academics have invested a lot of time and ink thinking about what we expect of students in an age of generative artificial intelligence. We may also need to think about what students expect from us.
I have seen students point out, with disdain, when a colleague used apparently AI-generated images on a slide in class. I have heard students express concern about faculty members grading their work with AI. You can find countless Reddit posts where students decry the hypocrisy of faculty disallowing or criminalizing AI use when faculty evidently use it themselves. I have even seen complaints about this on university parent message groups on social media. At the same time, it feels a little like it did during the early COVID-19 pandemic shutdowns. Educational technology companies like Grammarly are embedding AI into their tools, and I am getting regular invitations to webinars, workshops, and trials so that a particular company’s tool can capture the early market share that might lead to later growth and success. The report from Grammarly was one such effort, and it is broken down below.

Inverted Bloom’s Taxonomy and AI-Assisted Creation

The report’s premise is that AI is here, and it’s our job to teach students to work with it. The paper then moves through a few examples of how this might play out. One of the more interesting projections is a suggestion from Michelle Kassorla (2026, 5) that generative AI inverts the order of Bloom’s Taxonomy (figure 1).

Figure 1. Bloom’s Taxonomy (2001 revision). Moving from the bottom to the top of the pyramid represents a shift from lower-order thinking to higher-order thinking. Source: Creative Commons Attribution license, Vanderbilt University Center for Teaching, Bloom’s Taxonomy.

Instead of moving from recollection of facts through analysis, evaluation, and eventually creation, the student would create first (with AI), then move to evaluating the creation, then analyzing what made it suitable, and so on (see figure 2). Kassorla states that the inverted Bloom’s Taxonomy can “prepare our students for careers in a world filled with AI” (5).
Enacting this inversion requires that faculty think of the AI draft as the “raw material” for students to revise.

Figure 2. Inverted Bloom’s Taxonomy with AI. Source: Grammarly, 2025–26 AI Trends Report (San Francisco: Grammarly, 2025), 5. Reproduced by screenshot for educational and commentary purposes.

This approach has some promise from the perspective of the types of skills students might actually need at work. Some studies suggest students see the work generative AI produces as superior to what they could produce, at least in some ways (Salwa and Tyas 2024; Alsalmi, Al-Waaili, and Zeki 2025). If this is true, this process might help them check their optimism, a finding suggested by other authors (Cedrone 2025).

Limits of AI-First Learning

Still, I have questions. The biggest issue I see is that a novice in a content area is not going to be able to adequately assess the quality of possibly truthy work. In this case, the apparent quality of the writing might obscure the quality of the thinking, and it would be difficult for a novice to even know what good questions to ask. For example, AI might be great at writing a set of instructions for a physical therapy patient. The student could take those instructions and try to determine whether they are good, but that is a deeply complex task. It requires basic knowledge about the human body and how it works. It includes knowledge about how patients might receive information in that written form. It includes skill in producing written content. Are all of those things going to be learned in the context of doing this assignment? If so, what would your audience (in this case, the learners) think about completing it? When you start with a document that looks pretty good, are you motivated to exploit it to learn new content and skills?

A second piece in the report, from Nick Potkalitsky (2026, 7), offers a few clues. Students were asked to interrogate their own technology use in the face of choices.
This was, though, not so much a case of thinking with AI as thinking about AI.

The Role of Assessment

The third offering in the report, from Jason Gulya (2026, 10), sharpens this idea of audience choice by focusing on grading. It’s not a surprise to anyone in higher education that the external motivator of a grade is sometimes the paramount student interest. When a tool can get the grade, mission accomplished. Gulya suggests that to master work with AI, students need to be guided to focus on the process by which they do their work and to assess their own learning process. Personally, I’m afraid this will turn into “ChatGPT, write me a paragraph of things that are good about this paper and why it deserves an A.”

Learning as Process

Optimistically, the techniques of ungrading (see Blum 2020) may offer some paths forward here. I think this portion of the paper gets at the heart of the audience question when it comes to students: purpose. The purpose may be external, as in Kassorla’s focus on career preparation, or internal, as Gulya suggests faculty cultivate. But in either case, it can be a useful framework to keep students engaged.

References

Alsalmi, Hamed, Mahmood Al-Waaili, and Akram M. Zeki. 2025. “Artificial Intelligence in Higher Education: A Study of Benefits, Challenges, and Academic Integrity Concerns.” Paper presented at the 2025 10th International Conference on Information and Communication Technology for the Muslim World (ICT4M), Kuala Lumpur, Malaysia, November 26–27. https://doi.org/10.1109/ICT4M68001.2025.11363508.

Blum, Susan D., ed. 2020. Ungrading: Why Rating Students Undermines Learning (and What to Do Instead). Morgantown, WV: West Virginia University Press.

Cedrone, Amy M. 2025. “Generative AI Use in a Business Ethics Course Assignment: A Descriptive Study on Student AI Choice and Perceptions.” Teaching and Learning Excellence through Scholarship 5 (1). https://doi.org/10.52938/tales.v5i1.3491.

Grammarly. 2025. 2025–26 AI Trends Report. San Francisco: Grammarly. https://downloads.ctfassets.net/1e6ajr2k4140/4pkhlb5YUKH6Xb7T9jTMSf/373cb2ec588547f055c8f2d4b66869f7/Trends_Report__2_.pdf.

Salwa, Athiyah, and Novita Kusumaning Tyas. 2024. “Exploring Students’ Perception of EFL on the Use of ChatGPT to Complete the English Writing Task.” Berumpun: International Journal of Social, Politics, and Humanities 7 (1): 80–92. https://doi.org/10.33019/berumpun.v7i1.186.

About the Author

Amanda Sturgill, associate professor of journalism, is the 2024–2026 CEL Scholar. Her work focuses on the intersection of artificial intelligence (AI) and engaged learning in higher education. Dr. Sturgill also previously contributed posts on global learning as a seminar leader for the 2015–2017 research seminar on Integrating Global Learning with the University Experience.

How to Cite This Post

Sturgill, Amanda. 2026. “Assessment in the Upside Down: Academic AI with Students as the Audience.” Center for Engaged Learning (blog). Elon University. April 21, 2026. https://www.centerforengagedlearning.org/assessment-in-the-upside-down-academic-ai-with-students-as-the-audience.