Data Literacy as a Precursor to AI Literacy

by Cora Wigger
October 28, 2025

As the new school year begins and our weeks are filled with slide edits, planning meetings, and the smell of new notebooks, one theme is dominating the preparation: artificial intelligence. I am not an AI expert. I have no tips and tricks for integrating large language models (LLMs) into your course tools, or even anything particularly novel about how to discourage LLM use. What I do understand is how to ask questions of data and analysis methods, and the growth of AI calls on us to incorporate those skills in new ways.

In my work with Dr. Amanda Kleintop, we are focusing on how to engage students in data literacy that promotes a deeper understanding of equity. Instead of separating AI literacy and data literacy, we see data literacy as a key precursor to AI literacy (Koltay 2025). From our perspective, the critical interrogation of data—whether qualitative or quantitative, primary or secondary source—is a huge component of this precursor. As Koltay (2025) writes, “AI literacy relies on the ability to understand the intrinsic nature of data and to use this understanding to inform decision-making processes” (45).

Consequences of AI Illiteracy

The importance of data literacy for AI literacy goes beyond responsible use and crosses into ethics and equity. Johnson (2025) argues in The Review of Black Political Economy that current algorithm-based AI tools are not capable of being “equitable and racially just.” His argument, which I will do my best to partially summarize (without AI), is that the data that large language models are currently trained on is racially biased due to historical inequities and therefore reflects historical racial injustice. These models then run the risk of reinforcing racially biased “solutions” to problems of optimization, particularly when we can’t clearly see what is being optimized. These issues can arise both from the data the models are trained with and from the “opacity” of the models themselves. In essence, the biases—and potential for harm—embedded in AI are not always clear.

Johnson proposes many solutions to this problem, almost all of which fall under the umbrella of “regulatory oversight.” But his other call, which he weaves throughout his analysis as a necessary but insufficient condition for progress, is something we can all work towards: thoughtfulness.
For those of us in the classroom, rather than on regulatory oversight committees, we can focus on modeling and teaching students the importance of asking questions to better understand bias. What (and who) do the data and information the algorithm uses include and leave out? What kinds of assumptions do the models rely on to draw their conclusions, and how convinced are you of those assumptions? Is this a reliable source?

The Reality of Unregulated AI Usage

Some of this may feel obvious, but that doesn’t mean it’s being done in practice. A March 2025 report from McKinsey on the use of AI in organizations around the world found that—among those whose organizations regularly use generative AI—only 27 percent of respondents said that their organization reviews all of its AI output before use. A majority (63 percent) reported that their organizations review 80 percent or less of their AI output before use, and just under a third said that less than 20 percent of the output was reviewed.

There are more questions to ask of the McKinsey data and analysis to understand what these numbers can tell us, only some of which I could find answers to. McKinsey reports that it received 1,491 responses from “participants at all levels of the organization,” which I take to mean that respondents may have had varying levels of knowledge of how much output is actually reviewed in practice. In fact, the shares McKinsey reported exclude those who responded “I don’t know,” but the report does not say how large that group was. After setting aside respondents whose organizations don’t use GenAI and those who responded “I don’t know,” McKinsey was left with 830 responses with which to calculate the statistics above (see the short sketch at the end of this post for why that exclusion matters). I was also unclear on McKinsey’s sampling method, which gives me pause when thinking about whom these results can be generalized to. It’s also important to note that the data was collected in July 2024, even though the report came out in March 2025. Given the rapid change in the state of GenAI usage, these numbers are likely already outdated.

What I do feel we can reliably learn from this data, however, is that it is not a foregone conclusion that all GenAI output is being reviewed before being put to use. While exactly what share of companies and organizations are reviewing what share of AI output is probably a bit wiggly to pin down, it seems reasonable to operate under the assumption that a good chunk of output is not being reviewed before being put to use.

Combating Ignorance with Thoughtfulness

This brings us back to Johnson’s call for thoughtfulness. I argue that we can instill that thoughtfulness in students by encouraging data literacy that’s focused on asking questions from the outset. Johnson argues that algorithms “become problematic when they become embedded assumptions not appropriately reviewed by humans,” leading to a “problematic default option” (305). Whether the default option comes from a common data source, a statistical technique, an author, or an algorithm, the solution as teachers and learners begins the same way: Ask questions. Follow the train of thought or a source. Be willing to make a judgment for yourself.

To be honest, Amanda and I may never again explicitly talk about AI as we continue to write about data literacy in engaged learning. Plenty of good folks are doing that work. But that doesn’t mean we have our heads in the sand. We hope that we can contribute to developing data-literate students on their way to being AI literate.
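Before closing, one concrete illustration of the kind of question-asking I have in mind. The short Python sketch below uses hypothetical counts; McKinsey does not publish this breakdown, so I invented numbers that are merely consistent with the 830 responses and the 27 and 63 percent shares above. It shows how excluding “I don’t know” responses from the denominator changes every reported share:

```python
# A minimal sketch with hypothetical counts. McKinsey does not publish
# this breakdown; the counts below were invented so that the first two
# shares match the reported 27 and 63 percent, and the "I don't know"
# count (120) is pure guesswork.
answers = {
    "reviews all output before use": 224,
    "reviews some but not all output": 522,
    "reviews little or no output": 84,
    "I don't know": 120,
}

total_all = sum(answers.values())                  # every respondent
total_known = total_all - answers["I don't know"]  # McKinsey-style denominator (830 here)

for answer, n in answers.items():
    if answer == "I don't know":
        continue
    print(f"{answer}: {n / total_known:.0%} of those who answered, "
          f"but only {n / total_all:.0%} of all respondents")
```

The particular numbers don’t matter; what matters is that knowing who sits in the denominator is exactly the kind of question a data-literate reader learns to ask.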
Ultimately, we want to build on data literacy, not bypass it.

Acknowledgements

The content in this post would not exist were it not for the many ongoing conversations on the topic with friends, family, and colleagues, and particularly those with my CEL collaborator Dr. Amanda Kleintop, as well as a plethora of resources shared by Jeremy Needle.

References

Center for Engaged Learning. 2025. “AI and Engaged Learning.” Center for Engaged Learning (blog). https://www.centerforengagedlearning.org/category/ai-and-engaged-learning/.

Johnson, Daniel K. N. 2025. “Gaslighting Ourselves: Racial Challenges of Artificial Intelligence in Economics and Finance Applications.” The Review of Black Political Economy 52 (3): 303–35. https://doi.org/10.1177/00346446251331615.

Kleintop, Amanda, and Cora Wigger. 2025. “Data Literacy in Engaged Learning: Understanding Bias.” Center for Engaged Learning (blog). https://www.centerforengagedlearning.org/data-literacy-in-engaged-learning-understanding-bias/.

Koltay, Tibor. 2025. “From Data Literacy to Artificial Intelligence Literacy: Background and Approaches.” Central European Library and Information Sciences Review (CELISR) 2 (1). https://doi.org/10.3311/celisr.38042.

Singla, Alex, Alexander Sukharevsky, Lareina Yee, Michael Chui, and Bryce Hall. 2025. “The State of AI: How Organizations Are Rewiring to Capture Value.” McKinsey & Company. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai.

Wigger, Cora. 2025. “Learning Your Data: Teaching with Data Biographies.” Center for Engaged Learning (blog). https://www.centerforengagedlearning.org/learning-your-data-teaching-with-data-biographies/.

About the Author

Cora Wigger is an assistant professor of economics and a 2025–2027 CEL Scholar. Her research focuses on the intersections of education and housing policy, with an emphasis on racial inequality and desegregation. At Elon, she teaches statistics and data-driven courses and contributes to equity-centered initiatives like the “Quant4What? Collective” and the Data Nexus Faculty Advisory Committee.

How to Cite This Post

Wigger, Cora. 2025. “Data Literacy as a Precursor to AI Literacy.” Center for Engaged Learning (blog). October 28, 2025. https://www.centerforengagedlearning.org/data-literacy-as-a-precursor-to-ai-literacy.