How Might Generative AI Impact DEI in University Classes
by Amanda Sturgill, May 6, 2025

I sometimes describe the content that generative AI creates as "pumpkin spice latte content," which gets a giggle from my students. I think the analogy holds up: It's tasty. It's filling. And it is absolutely not anything that will surprise and delight. That's because it's basic. It's what's expected.

"Expected" can be a difficult term, though. When it comes to incorporating diverse perspectives, what a model treats as expected depends on the data it was trained with. When that data has a strong majority perspective, that's what an AI is likely to return. What does this mean for higher education? Algorithmic bias can occur when AI systems reflect or amplify existing societal biases, such as racial, gender, or socioeconomic disparities, often due to biased data or flawed design (NIST 2019).

Considering Bias

Research shows that AI can help make education more inclusive, but only if it's used responsibly. Pawar and Khose (2024) and Roshanaei et al.
(2023) note that AI has the potential to personalize learning and possibly support underrepresented students. They also, however, see challenges like algorithmic bias, privacy concerns, and unequal access to technology. To address these issues, researchers have suggested ways to make AI more ethical with respect to diversity. Darwish et al. (2024) start with training, suggesting a deliberate effort to train models on diverse datasets and to monitor outputs for bias. The improvement process itself can also be strengthened by intentionally inviting diverse voices onto model-creating teams. That's a good start from the model perspective. Viberg et al. (2024) add that the user perspective also matters: we should consider the cultural and social context when using AI, they suggest, because it's not just about the technology but about the users.

Benefits of AI for DEI

AI can make classrooms more inclusive in some exciting ways. AI chatbots can offer students a private space for exploring identity questions, particularly socially challenging ones like race, gender, or disability. A student might ask an AI about LGBTQ+ history or how to self-advocate as a person with a disability, without worrying that others will judge them. In this way, AI could be a helpful tool for self-discovery.

For example, AI can be used for roleplaying. Students in a nursing class might practice conversations with an AI that simulates a patient with a different cultural or life history. The students can test their communication skills in a low-pressure environment before working with real people. By trying out different approaches, students can learn what works and build confidence.

AI can also help instructors design inclusive educational materials. A faculty member might use AI to simplify the language in an assignment or to identify idioms, dated references, or sarcasm that might not communicate well with everyone.
This can, in turn, make materials easier to understand for students for whom English is a second language or who are neurodivergent. AI can also suggest ways to make visual aids more accessible, such as offering alternatives for colorblind students. These kinds of alterations can enhance learning for all.

Risks of AI for DEI

AI also comes with risks, particularly for students from marginalized groups. As I mentioned earlier, AI responses are likely to reflect the biases in their training data. Because most commonly available generative AI models were trained on text taken from the Internet, they can carry the range of stereotypes and prejudices that have dominated Internet discussions. The AI can then repeat and reinforce those prejudices. My students commonly use AI as a search engine, and asking an AI for a historical summary might return a response that leaves out key contributions from women or people of color. These kinds of interactions can cause harm, especially when students assume the AI is a quality information source.

It can be hard to notice these biases in AI outputs. The same critical thinking skills that universities teach for other kinds of information must apply to AI-generated information. Strategies for mitigating bias can include promoting open-mindedness and raising awareness of bias. Metacognitive skills and reflective practices are crucial (Royce et al. 2019). Some scholars advocate considering many different ways of thinking about a problem and stepping back to analyze how AI and human thought patterns develop and influence each other (Cardinal 2022). AI users practicing critical thinking may reflect on how they could have thought differently, training themselves to recognize and correct their own thinking errors by getting better at observing and controlling their thought processes (Zenker 2014; Maynes 2015).

Accessibility and AI

Accessibility also matters when incorporating AI tools.
AI can help include students with disabilities, but only if it's designed and used with care. For example, AI tools that simplify text or summarize readings can help students with reading difficulties by breaking down complex material. AI-powered transcription tools can also support students who need captions on audio content. Features like these help lower barriers.

Not every AI tool is accessible, though. Some chat interfaces don't work well with screen readers, and others create images that don't work for students with vision impairments. Content created with AI must always be checked to ensure both accurate information and accessible formatting; features such as alt text and headings may need to be added manually (Cornell n.d.). Instructors need to allow time to vet AI tools carefully before choosing to use them. It's useful to ask the students themselves and to have planned alternatives if a tool is not suitable.

Tips

Here are some tips for incorporating AI with a DEI lens:

- Review AI outputs: Instructors should inspect AI-generated output to make sure it is inclusive and free of bias.
- Teach students to think critically: Students should learn to question AI output as well. Model asking "Whose voices are missing?" or "Does this reflect bias?"
- Encourage reflection: Both students and instructors should reflect on what happened when they used AI. Did the tool work as expected? Did they find bias or other problems?

References

Cardinal, Monique. 2022. "Understanding Factors of Bias in the Way Information Is Processed Through a Metamemetic and Multilogical Approach." Intelligent Information Management 14: 80–91. https://doi.org/10.4236/iim.2022.142006.

Cornell University. n.d. "AI & Accessibility." https://teaching.cornell.edu/generative-artificial-intelligence/ai-accessibility.

Darwish, Sara, Alison Bragaw-Butler, Paul Marcelli, and Kaylee Gassner. 2024.
"Diversity, Equity, and Inclusion, and the Deployment of Artificial Intelligence Within the Department of Defense." Proceedings of the AAAI Symposium Series 3 (1): 348–53. https://doi.org/10.1609/aaaiss.v3i1.31233.

Maynes, Jeffrey. 2015. "Critical Thinking and Cognitive Bias." Informal Logic 35: 183–203. https://doi.org/10.22329/il.v35i2.4187.

NIST. 2019. "Face Recognition Vendor Test (FRVT): Part 3: Demographic Effects." National Institute of Standards and Technology. https://doi.org/10.6028/NIST.IR.8280.

Pawar, Gitanjali, and Jaydip Khose. 2024. "Exploring the Role of Artificial Intelligence in Enhancing Equity and Inclusion in Education." International Journal of Innovative Science and Research Technology (IJISRT), 2180–85. https://doi.org/10.38124/ijisrt/IJISRT24APR1939.

Roshanaei, Maryam, Hanna Olivares, and Rafael Rangel Lopez. 2023. "Harnessing AI to Foster Equity in Education: Opportunities, Challenges, and Emerging Strategies." Journal of Intelligent Learning Systems and Applications 15 (4): 123–43. https://doi.org/10.4236/jilsa.2023.154009.

Royce, Celeste S., Margaret M. Hayes, and Richard M. Schwartzstein. 2019. "Teaching Critical Thinking: A Case for Instruction in Cognitive Biases to Reduce Diagnostic Errors and Improve Patient Safety." Academic Medicine, 187–94. https://www.sap2.org.ar/i2/archivos/189.pdf.

Viberg, Olga, René F. Kizilcec, Alyssa Friend Wise, Ioana Jivet, and Nia Nixon. 2024. "Advancing Equity and Inclusion in Educational Practices With AI-powered Educational Decision Support Systems (AI-EDSS)." British Journal of Educational Technology, July. https://doi.org/10.1111/bjet.13507.

Zenker, Frank. 2014. "Know Thy Biases! Bringing Argumentative Virtues to the Classroom." Ontario Society for the Study of Argumentation. https://scholar.uwindsor.ca/ossaarchive/OSSA10/papersandcommentaries/191/.

Additional Reading

Ali, Wael, Rachid Alami, Mohammad A. K. Alsmairat, and Turki Al Masaeid. 2024.
"Consensus or Controversy: Examining AI's Impact on Academic Integrity, Student Learning, and Inclusivity Within Higher Education Environments." In 2024 2nd International Conference on Cyber Resilience (ICCR), 1–5. https://doi.org/10.1109/ICCR61006.2024.10532968.

Dieterle, Edward, Chris Dede, and Michael Walker. 2022. "The Cyclical Ethical Effects of Using Artificial Intelligence in Education." AI & Society 39 (2): 633–43. https://doi.org/10.1007/s00146-022-01497-w.

Smolansky, Adele, Huy A. Nguyen, Rene F. Kizilcec, and Bruce M. McLaren. 2023. "Equity, Diversity, and Inclusion in Educational Technology Research and Development." In Communications in Computer and Information Science, 57–62. https://doi.org/10.1007/978-3-031-36336-8_8.

About the Author

Amanda Sturgill, associate professor of journalism, is the 2024-2026 CEL Scholar. Her work focuses on the intersection of artificial intelligence (AI) and engaged learning in higher education. Dr. Sturgill also previously contributed posts on global learning as a seminar leader for the 2015-2017 research seminar on Integrating Global Learning with the University Experience.

How to Cite This Post

Sturgill, Amanda. 2025. "How Might Generative AI Impact DEI in University Classes." Center for Engaged Learning (blog). May 6, 2025. https://www.centerforengagedlearning.org/how-might-generative-ai-impact-dei-in-university-classes.