AI and Learning About Cultures
by Amanda Sturgill
July 8, 2025

Figure 1. Image created with ChatGPT with the following prompt: Black and white line art image of two people sitting across a table from each other. One is a young adult female dressed casually, and the other is an older man wearing Bavarian traditional dress including hat. They are holding and talking across a tin can telephone.

Several of the high-impact practices in higher education require learners to practice stepping out of their cultural comfort zone. For instance, a co-op or internship can require adapting to the culture of a workplace. Undergraduate research means learning to understand the culture of an individual field as well as that of a larger community of inquiry. Community-based learning can require learners to engage with differences in their communities or in the community where the university is located. And, of course, international study means that in some cases, students are immersed in a culture rather different from the familiar, sometimes without readily available support from the college or university.

Opportunities

Tools powered by large language models (LLMs) might be one solution for supporting these transitions. For example, a student might practice language with a chatbot and have the opportunity to learn some lessons about a target culture along the way. Another exciting possibility is creating a cultural simulation, where the learner can converse with a prompted LLM acting as a local and answering questions. This could work with a simulated lab director, community partner, or internship supervisor. In these cases, the LLM could give simulated access to a cultural informant.

I tried this out, giving ChatGPT the following prompt: "Act as a citizen of Berlin, Germany. I am a college student preparing to study abroad there and I will ask you, in English, questions about the people and culture. Answer them as that citizen and answer in English, unless I ask you to say something in German. When you are ready for the first question, say 'Ready'."

We went through several sets of questions and answers. I asked questions about German rules on jaywalking and detailed recycling. I asked about tips for getting into Berlin's legendary dance clubs. I also asked questions about legacies of World War II and whether I was likely to see Nazis. (This is a question I personally would likely not ask a casual acquaintance in the country, but I could definitely see my students wondering about it!) I mostly got useful answers that align with what I learned through my own study.
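For readers who want to build this kind of simulation into an assignment rather than typing into the chat interface, the same setup can be scripted against an LLM API. Below is a minimal sketch using the OpenAI Python SDK; the gpt-4o model name and the ask helper are illustrative assumptions, and any chat-capable model or provider would work:

```python
# A sketch of the "cultural informant" setup, assuming the OpenAI Python SDK (v1.x).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# System prompt adapted from the one used in this post.
SYSTEM_PROMPT = (
    "Act as a citizen of Berlin, Germany. I am a college student preparing "
    "to study abroad there and I will ask you, in English, questions about "
    "the people and culture. Answer them as that citizen and answer in "
    "English, unless I ask you to say something in German."
)

messages = [{"role": "system", "content": SYSTEM_PROMPT}]

def ask(question: str) -> str:
    """Send one student question and return the simulated local's answer."""
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any chat-capable model works here
        messages=messages,
    )
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})  # keep context
    return answer

print(ask("What should I know about jaywalking in Berlin?"))
print(ask("How does recycling work there?"))
```

Keeping the running messages list is what lets the simulated Berliner remember earlier questions, so follow-ups stay in context across a session.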
The answers had detail that was helpful and felt targeted to an American student. There were limitations, though. In particular, the answers were given as if the respondent primarily wanted to make the questioner comfortable. This seemed like an extension of the people-pleasing attitude LLMs have struggled with (Richard 2025), but it could give me an inaccurate impression about the cultural role of small talk in some places. A question about getting into clubs gave the same tips that are available in English-language media articles, but it assumed the questioner would already know that some Berlin clubs are notoriously difficult to get into. The answer from the LLM made it seem like following two or three easy practices would be enough. My questions about social rules, such as not jaywalking, and about Nazis both got full answers. I noticed that the tone of the answers would lead me to believe that these would be appropriate topics to raise with a casual German acquaintance. But I didn't get much nuance about how I might actually engage with German citizens on questions like this. For that kind of nuance, videos like this or posts like this one on Reddit, the social forum, could be more informative, though the second is satire, which could be difficult for some readers to recognize.

With careful prompting, LLMs are making high-level simulations more accessible to non-experts (Giabbanelli 2023) and have been used to replicate how people search for information online (Zhang et al. 2024). In higher education, these models are expanding possibilities, whether it's generating diverse simulated users to aid product development (Ataei et al. 2024), supporting language classrooms by creating materials and classroom activities that encourage collaboration among students (Bonner, Lege, and Frazier 2023), or simulating realistic classroom interactions between teachers and students (Zhang et al. 2024).

Challenges

These possibilities are exciting, but one potential challenge is a phenomenon called stereotype leakage. Some LLMs are trained on data from multiple languages, and perspectives from the training data can transfer into responses in a different language or about a different culture (Cao et al. 2023). This transfer can be positive or negative. For example, if the English-language training data carries a negative perspective on feminists, that perspective can surface in responses in another language even when it is not accurate for the culture being simulated. Similarly, a stereotype like the model minority myth from English-language data might transfer to answers in another language. The aphorism that AI is only as good as its training data can have some memorable consequences when you are constructing cross-cultural simulations (Jenks 2025). This flaw can cause simulations to misrepresent cultural dynamics, portraying groups based on leaked bias instead of reality. Simulations could provide valuable practice in a safe space, but because leaked biases can reinforce harmful ideas, it may be inappropriate to use simulations to promote understanding and empathy.
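One low-cost way to spot-check a planned simulation for leakage is to pose the same question to personas from different cultures and review the answers side by side before students ever see them. The sketch below does this with the OpenAI Python SDK; the personas, the question (which echoes the feminism example above, drawn from Cao et al.), and the model name are all illustrative assumptions, and the output still requires human judgment, ideally from someone who knows the cultures involved:

```python
# Illustrative spot-check for stereotype leakage: pose the same question to
# personas from two cultures and compare the answers side by side. This only
# surfaces candidates for human review; it does not measure bias by itself.
from openai import OpenAI

client = OpenAI()

# Hypothetical personas chosen for illustration.
PERSONAS = {
    "Berlin": "Act as a citizen of Berlin, Germany. Answer in English.",
    "Seoul": "Act as a citizen of Seoul, South Korea. Answer in English.",
}

QUESTION = "How are feminists generally viewed where you live?"

def probe(persona_prompt: str) -> str:
    """Ask the loaded question under one persona and return the answer."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption
        messages=[
            {"role": "system", "content": persona_prompt},
            {"role": "user", "content": QUESTION},
        ],
        temperature=0,  # reduce run-to-run variation for comparison
    )
    return response.choices[0].message.content

for city, prompt in PERSONAS.items():
    print(f"--- {city} ---")
    print(probe(prompt))
```

If the two answers sound suspiciously alike, or both echo an English-language framing, that is a signal the simulation may be leaking perspectives across cultural boundaries rather than reflecting the culture it claims to represent.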
“Large Language Model-Based Artificial Intelligence in the Language Classroom: Practical Ideas for Teaching.” Teaching English with Technology 23 (1): 23–41. https://doi.org/10.56297/BKAM1691/WIEO1749. Cao, Yang Trista, Anna Sotnikova, Jieyu Zhao, Linda X. Zou, Rachel Rudinger, and Hal Daume III. 2023. “Multilingual Large Language Models Leak Human Stereotypes Across Language Boundaries.” arXiv. https://doi.org/10.48550/arXiv.2312.07141. Giabbanelli, Philippe J. 2023. “GPT-Based Models Meet Simulation: How to Efficiently Use Large-Scale Pre-Trained Language Models across Simulation Tasks.” In 2023 Winter Simulation Conference (WSC), 2920–2931. IEEE. https://doi.org/10.48550/arXiv.2306.13679. Jenks, Christopher J. 2025. “Communicating the Cultural Other: Trust and Bias in Generative AI and Large Language Models.” Applied Linguistics Review 16 (2): 787-795. https://doi.org/10.1515/applirev-2024-0196. Richard, Isaiah. 2025. “OpenAI Rolls Back ChatGPT Upgrade That Made It ‘Too Nice’ After It Sparks Memes, Controversy.” TechTimes. https://www.techtimes.com/articles/310177/20250429/openai-rolls-back-chatgpt-upgrade-that-made-it-too-nice-after-it-sparks-memes-controversy.htm. Zhang, Erhan, Xingzhu Wang, Peiyuan Gong, Yankai Lin, and Jiaxin Mao. 2024. “USimAgent: Large Language Models for Simulating Search Users.” In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’24), 2687–2692. New York: Association for Computing Machinery. https://doi.org/10.1145/3626772.3657963. About the Author Amanda Sturgill, associate professor of journalism, is a 2024-2026 CEL Scholar. Her work focuses on the intersection of artificial intelligence (AI) and engaged learning in higher education. Dr. Sturgill also previously contributed posts on global learning as a seminar leader for the 2015-2017 research seminar on Integrating Global Learning with the University Experience. How to Cite this Post Sturgill, Amanda. 2025. “AI and Learning About Cultures.” Center for Engaged Learning (blog). July 8, 2025. https://www.centerforengagedlearning.org/ai-and-learning-about-cultures.