Generative AI offers both promise and limitations for minoritized and disabled students. Recent publications show a clear tension: these tools can expand access and participation for some learners who might otherwise have been excluded. At the same time, generative tools have the potential to strengthen the very inequities they are said to lessen. For universities committed to fairness and equity, the challenges around generative AI grow, as educators must ensure that the tools meaningfully support, rather than further marginalize, historically underserved students.

Potential Benefits 

Scholars and pundits alike have praised generative AI as a tool to create adaptive or even personalized lessons. Unlike a traditional class with one teacher and many learners meeting together over the course of a semester, AI-driven systems can adjust difficulty, add scaffolding, and try different modalities in real time. For students who need alternative approaches, whether because of cultural differences, uneven prior education, or learning differences, this flexibility can matter. Scholars such as Judijanto (2025) and Seeley and Cournoyea (2025) emphasize that adaptive systems can offer these differentiated experiences at scale, potentially helping to overcome opportunity gaps.

Generative AI also plays a growing role in bridging cultural and linguistic divides. Machine translation and dialect-aware communication tools reduce friction in classes where students have different approaches to language, supporting those who are navigating English as a new language. Research by Liang et al. (2024) and Cao et al. (2024) suggests that AI support can help these students participate more fully in learning because they feel confident that they are communicating clearly.

A third benefit is improved access to complex information. Generative AI models can give novices the capacity to interact with sophisticated scientific or policy simulations by asking questions in everyday language. Giabbanelli et al. (2025) argue that this “approachability” might let students from historically excluded groups engage fully in complex decisions that matter in their lives.

Persistent and Emerging Risks 

Still, there are risks, both in the information an AI model draws on and in how it determines what to return when answering a prompt. Generative models work from their training data, and because those datasets overrepresent Western, affluent, and white populations, the models can reproduce stereotypes or deliver lower-quality outputs for marginalized users.

The phenomenon of stereotype leakage is one example of how bias can move across languages and contexts, causing subtle but significant harm. Structural inequities also come into play when we consider who has access to AI-supported learning. Most AI tools require devices and digital skills, and some systems work more smoothly when users have high-speed Internet. Gaps in skills or access can systematically impede low-income students and, in turn, worsen digital divides.

Decisions about access are often made by the companies that offer AI systems, and this can create problems of its own. For example, students could have free access while the faculty who might teach them responsible use do not.

There are also ethical issues involving privacy and surveillance that may especially affect disabled and special education students, who already experience disproportionate monitoring. It is difficult, and sometimes impossible, to know how AI systems work, and this opacity matters most for systems that infer emotional states or behavioral patterns, raising questions about data governance and informed consent. If educators require students to use AI systems, how might this affect learner privacy? Are these learners able to know about and consent to those risks?

Finally, learners from different cultural backgrounds may fail to thrive with AI systems built on Western-centric assumptions about learning. For students from collectivist or Indigenous traditions, tools that privilege individualistic, efficiency-driven models may feel alienating or irrelevant. If AI tutors replace educators who understand their learners and adapt instruction accordingly, some students risk being served poorly.

Generative AI tools offer intriguing possibilities and are not inherently harmful, but it is important to consider the full spectrum of learners and their needs. For educators, the task is to use these tools in culturally responsive and accountable ways that advance equity rather than settling for exclusion.


References 

Brown, Lydia X. Z., Ridhi Shetty, Matt Scherer, and Andrew Crawford. 2022. “Ableism And Disability Discrimination In New Surveillance Technologies: How New Surveillance Technologies in Education, Policing, Health Care, and the Workplace Disproportionately Harm Disabled People.” Center for Democracy and Technology. https://cdt.org/insights/ableism-and-disability-discrimination-in-new-surveillance-technologies-how-new-surveillance-technologies-in-education-policing-health-care-and-the-workplace-disproportionately-harm-disabled-people/

Ghotbi, Nader. 2022. “The Ethics of Emotional Artificial Intelligence: A Mixed Method Analysis.” Asian Bioethics Review 15 (4): 417–430. https://doi.org/10.1007/s41649-022-00237-y

Giabbanelli, Philippe J., Jose J. Padilla, and Ameeta Agrawal. 2025. “Broadening Access to Simulations for End-Users via Large Language Models: Challenges and Opportunities.” WSC ’24: Proceedings of the Winter Simulation Conference, 2535–2546. https://dl.acm.org/doi/10.5555/3712729.3712939

Judijanto, Loso. 2025. “Beyond Access: Cultural, Ethical, and Infrastructural Challenges of AI in Marginalised Education Contexts.” European Journal of Contemporary Education and E-Learning 3 (6): 83–98. https://doi.org/10.59324/ejceel.2025.3(6).07

Liang Zhaokai, Liu Yuexi, and Luo Yihao. 2024. “The Application of Large Language Models Reducing Cultural Barriers in International Trade: A Perspective from Cultural Conflicts, Potential and Obstacles.” Lecture Notes in Education Psychology and Public Media 47: 33–37. https://doi.org/10.54254/2753-7048/47/20240875

Nwatu, Joan, Oana Ignat, and Rada Mihalcea. 2023. “Bridging the Digital Divide: Performance Variation across Socio-Economic Factors in Vision-Language Models.” arXiv. https://doi.org/10.48550/arXiv.2311.05746

Seeley, Sarah, and Michael Cournoyea. 2025. “‘I’m Not Worried about Robots Taking Over the World. I Guess I’m Worried about People’: Emoting, Teaching, and Learning with Generative AI.” Teaching & Learning Inquiry 13: 1–19. https://doi.org/10.20343/teachlearninqu.13.43

Sturgill, Amanda. 2025. “AI and Learning About Cultures.” Center for Engaged Learning (blog). Elon University, July 8, 2025. https://www.centerforengagedlearning.org/ai-and-learning-about-cultures

Yang Trista Cao, Anna Sotnikova, Jieyu Zhao, Linda X. Zou, Rachel Rudinger, and Hal Daumé III. 2024. “Multilingual Large Language Models Leak Human Stereotypes Across Language Boundaries.” arXiv. https://doi.org/10.48550/arXiv.2312.07141


About the Author 

Amanda Sturgill, associate professor of journalism, is the 2024–2026 CEL Scholar. Her work focuses on the intersection of artificial intelligence (AI) and engaged learning in higher education. Dr. Sturgill previously contributed posts on global learning as a seminar leader for the 2015–2017 research seminar on Integrating Global Learning with the University Experience.

How To Cite This Post 

Sturgill, Amanda. 2026. “Generative AI and Non-Majority Students: Risks and Benefits.” Center for Engaged Learning (blog). Elon University. February 3, 2026. URL.