Data Literacy in Engaged Learning: Understanding Bias
by Amanda Kleintop and Cora Wigger
July 22, 2025

Over my career as a student and professor, I (Amanda) have come to learn, with the help of my colleague Dr. Cora Wigger, that understanding bias in research is essential to understanding the causes and perpetuation of racism. In my first undergraduate History course, a first-year seminar on Southern History taught by the president of the university, my professor Dr. Ed Ayers banned the word "biased" from the classroom. After one student critiqued an author as "biased," the professor corrected us: everything is biased. Not only is bias an insufficient critique of any source, but people often use the word to dismiss a source and avoid more useful, critical interrogation.

Eighteen years later, as a professor of History, I haven't yet committed to banning the word from my classrooms. But I've been tempted. My professor's challenge forced me as a student to be skeptical and to find other, more relevant critiques of sources—and of all data. Since all data is biased, I remind my students, historians have to ask who the author of a source is and what their motivation was in creating it. Then we can understand the extent and limits of a source and determine what one source from one perspective can reliably tell us about the past.

Bias in Methods

I told this story to Dr. Cora Wigger, my CEL Scholar collaborator for 2025-2027, after our first semester teaching at Elon University in the fall of 2022. At that point, I had taught history courses full-time for four years, and I had explained this perspective on bias to students almost every semester in my intro-level survey courses covering US history.

Cora is an applied microeconomist, meaning her research and teaching emphasize quantitative data analysis about the social world. While I use bias in the more generally understood sense to mean "not objective," quantitative social scientists regularly use it to refer to whether an analysis tool or model can reliably estimate, on average, what we're trying to measure. The classic metaphor for this form of bias is to picture a dart board. You may not be particularly good at darts, but if you throw 100 darts at the board, they should cluster around the bullseye, meaning that your throws are an unbiased strategy for hitting the center. Any individual dart may miss the bullseye, but only because of random error rather than systematic error. If the darts cluster around a different point, it means that your aim is systematically off – or, in this language, biased. How close the darts cluster together is referred to as "precision," and where they cluster indicates whether the method itself is biased.

Figure 1. Author's depiction of the classic metaphor of bias versus precision, picturing darts on a dart board. Unbiased estimators (left two examples) are those that are correct (hitting the bullseye) on average, whereas biased estimators (right two examples) cluster around a different point than the bullseye, on average. A Google image search for "bias versus precision" yields many similar depictions.
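To make the dart-board metaphor concrete, here is a minimal, purely illustrative simulation (not from the post; the distribution, cutoff, and sample sizes are invented for the example). It repeatedly draws samples from a known distribution and compares an estimator that is correct on average with one whose recorded data systematically omits high values, so its estimates cluster away from the bullseye.

```python
# Illustrative sketch: repeated "dart throws" as repeated samples, comparing
# an unbiased estimator (the ordinary sample mean) with a biased one (a mean
# computed after high values were never recorded in the first place).
import random

random.seed(42)

TRUE_MEAN = 50          # the "bullseye" we are trying to estimate
N_SAMPLES = 1000        # number of repeated samples ("dart throws")
SAMPLE_SIZE = 30

unbiased_estimates = []
biased_estimates = []

for _ in range(N_SAMPLES):
    sample = [random.gauss(TRUE_MEAN, 10) for _ in range(SAMPLE_SIZE)]

    # Unbiased strategy: the sample mean of everything we drew.
    unbiased_estimates.append(sum(sample) / len(sample))

    # Biased strategy: values above 60 never make it into the data,
    # so the recorded mean is systematically pulled below the bullseye.
    recorded = [x for x in sample if x <= 60]
    biased_estimates.append(sum(recorded) / len(recorded))

print("True value:               ", TRUE_MEAN)
print("Average unbiased estimate:", round(sum(unbiased_estimates) / N_SAMPLES, 2))
print("Average biased estimate:  ", round(sum(biased_estimates) / N_SAMPLES, 2))
```

Across many repetitions, the first strategy averages out to the true value, while the second settles below it no matter how many darts are thrown: that gap, rather than the scatter of any single throw, is what the figure labels bias.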
Within my first few years of teaching, I (Cora), too, came close to banning the word bias as a standalone critique. As most researchers who use statistical methods know, nailing down analysis strategies that we can reasonably expect to be unbiased, on average, is exceedingly hard in applied, real-world scenarios. I teach an entire course on causal inference in which students become skilled at naming the different ways that bias shows up in methods of estimating effects. And yet, in critiques of applied economics research, a common lazy refrain is that the models are "biased," without explanation as to why, a move that seeks to dismiss rather than to adjust our strategies or our interpretations.

Human Bias and Methodological Bias Intersect

While our classroom definitions of bias mean different things, they intersect more than they may appear to. Teaching history highlights how human bias, perspective, and subjectivity shape what information we have available, how it's presented, and potentially how it's quantified. In teaching economics, a definition of systemic methodological bias must include an understanding of where the data comes from and the ways in which the human creation behind it can carry through the analysis process, leading to conclusions that are systematically off the bullseye. Further, what data is available to us as researchers largely determines what we are and are not able to "control for" in our models. In turn, data availability shapes the usefulness and accuracy of those models (see the sketch at the end of this section).

Although we teach students with different forms of data—qualitative, archival sources, and quantitative datasets—it turns out that we want students to understand similar principles of data literacy. While bonding over pedagogy brainstorming sessions, we both saw that students were less able to understand the content we taught around issues of racial inequality because their previous engagement with both historical primary sources and quantitative data analysis failed to account for the inherently biased, human origins of data and tools.

In the data that our students use, human prejudice is systemic. In Amanda's field, surviving primary sources in US history tend to be created by the white, the wealthy, and the state. For Cora, the large-scale datasets available to us result from what governments and institutions have chosen to prioritize measuring and therefore reflect the values of those systems. We want students to interrogate their data and see how these origins matter for interpretation. While methods can be biased, it is often the human methodological choices, the data collection, and the mere availability of data that lead to issues of bias. These root causes come from human prejudice, whether we're cognizant of it or not, which shapes what we think of and what's available to us. Often, the inequities that created or shaped our data continue to shape research and decision-making based on that data.
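As a concrete illustration of the "control for" point above, here is a minimal sketch (not the authors' method; the variable names, coefficients, and data-generating process are invented for the example) of how a factor that was never collected can bias an estimated effect. When the confounder is available, a simple regression recovers the true effect; when it is missing from the data, the estimate lands systematically away from it.

```python
# Illustrative sketch of omitted-variable bias: the outcome depends on both a
# "treatment" and a confounder. If the confounder was never recorded, the
# estimated treatment effect is systematically off, no matter how much data we have.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

confounder = rng.normal(size=n)                  # something the data source never recorded
treatment = 0.8 * confounder + rng.normal(size=n)
outcome = 2.0 * treatment + 3.0 * confounder + rng.normal(size=n)

def ols_slopes(X, y):
    """Return slope coefficients from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1:]  # drop the intercept

print("True treatment effect:        2.0")
print("With the confounder observed:",
      ols_slopes(np.column_stack([treatment, confounder]), outcome)[0].round(2))
print("With the confounder omitted: ",
      ols_slopes(treatment.reshape(-1, 1), outcome)[0].round(2))
```

With the confounder in hand, the estimated effect sits near the true value; without it, the estimate is pulled well above 2.0, and collecting more of the same incomplete data only makes the wrong answer more precise.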
Understanding Bias to Improve Data Literacy and Equity

Our CEL Scholar research is rooted in the hypothesis that it is essential for our students to understand how human agency and systemic inequities interact to create "biased" data and analysis. In practice, understanding the reliability of data and moving past the lazy critique to the intricate, informed, nuanced one is important for all consumers of social data, from the humanities to the social sciences and beyond. These are skills that we strive to teach students in our history and economics courses. Instead of saying "it's biased," we want students to be able to explain how something is biased, how we can learn from it, and how to use that knowledge to take steps to keep improving our understandings of the world.

We'll use our time and blog posts as CEL Scholars to explore other scholarship, pedagogy, and methods that help students understand the process of data creation and analytics through our interdisciplinary contexts. We'll explore, among other things:

- Definitions of data literacy for our study and the classroom.
- Methods to teach data equity through interdisciplinary approaches and with high-impact practices, such as integration of our research into undergraduate courses and mentored research.
- Teaching tools and lesson plans with freely available digital humanities and social sciences resources.

Sometimes we'll write together, sometimes separately, and sometimes we'll invite guest authors. These blog posts will help us move toward a study of our students' pre-existing understanding of data reliability. In our next post, we'll say a little more about how we're defining concepts like data literacy and expand on our conceptualization of reliability and bias. We look forward to moving from informal brainstorming sessions to engaging with other scholars of teaching and learning in our work.

About the Authors

Cora Wigger is an assistant professor of economics and a 2025–2027 CEL Scholar. Her research focuses on the intersections of education and housing policy, with an emphasis on racial inequality and desegregation. At Elon, she teaches statistics and data-driven courses and contributes to equity-centered initiatives like the "Quant4What? Collective" and the Data Nexus Faculty Advisory Committee.

Amanda Laury Kleintop is an assistant professor of history and a 2025–2027 CEL Scholar. She specializes in the US Civil War, Reconstruction, and emancipation. Her book, Counting the Costs of Freedom (2025), explores debates about compensating former enslavers in the US and profitmaking in slavery. It inspired her historical data and digital humanities project on African American soldiers in the Border States.

How to Cite This Post

Kleintop, Amanda and Cora Wigger. 2025. "Data Literacy in Engaged Learning: Understanding Bias." Center for Engaged Learning (blog), Elon University. July 22, 2025. https://www.centerforengagedlearning.org/data-literacy-in-engaged-learning-understanding-bias.