AI and the Truthiness of Search
by Amanda Sturgill, October 18, 2024

I like to have an engaging activity for my first-years on the first day of class, and this year I did a sticky note discussion. Students get a series of prompts, write their responses on sticky notes, and we post them around the room. Then the students visit the different questions and try to summarize themes. One prompt asked how AI could be helpful to a student. It was interesting: many of the students focused on AI as a thought partner, a tool for ideation. A few talked about AI as a writing fixer, helping with grammar and spelling. And one said he uses AI to check facts. "Ruh roh," I thought. My own trials with AI tools have made me think of them as being like a person who has trouble saying, "I don't know." Rather than admit uncertainty, the AI mixes what it does know with truthy "facts." Since it's all said with confidence, it's hard to know what's true. I made a point of mentioning to my students in class that AI might be useful for a lot of things, but validation isn't something I'd trust it with right now.
Honestly, fact-checking has gotten easier in some ways, but a lot harder in others. It's been a few years now since I got a paper from a first-year student in which an in-text citation was just "according to Google." I feel like I've developed some student-friendly ways to explain how search affects what's considered to be knowledge. It's well known that search engines use algorithms to try to return appropriate information for what the user asks, hence the Search Engine Optimization (SEO) industry. And I'm old enough to remember doing some early studies of how people process information from search pages. A lot more has been done since. Several studies conclude that search result bias significantly influences decision-making, including users' ability to make accurate decisions (Pogacar, Ghenai, Smucker, and Clarke 2017; Novin and Meyers 2017). Specifically, research suggests that users pay more attention to and engage more with higher-ranking results, which of course presents a risk when you use the search algorithm as a shortcut to what is correct or valuable, "according to Google" (Pogacar, Ghenai, Smucker, and Clarke 2017; Bettencourt et al. 2022; Novin and Meyers 2017). Furthermore, the way search results look seems to affect how users work with the information (Novin and Meyers 2017; Azzopardi, White, Thomas, and Craswell 2020; Bettencourt et al. 2022). Finally, there are factors in the users themselves, such as age, cognitive bias, and how much they already know about a topic (Pogacar, Ghenai, Smucker, and Clarke 2017; Ghenai, Smucker, and Clarke 2020; Bettencourt et al. 2022). This matters when it comes to AI summaries at the top of those search pages. The photos below show what these summaries look like on the top search engines, Google and Bing.

[Screenshot of Google search engine]
[Screenshot of Microsoft Bing search engine]
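The rank-attention effect those studies describe can be sketched with a toy simulation. This is purely illustrative and not drawn from the post or any of the cited studies: the DCG-style attention discount and the accuracy numbers are assumptions chosen to show the shape of the problem, namely that when attention concentrates on top-ranked results, whatever happens to sit there dominates what users absorb, regardless of its accuracy.

```python
# Toy sketch (illustrative assumptions only): rank-weighted attention can
# amplify whatever happens to sit at the top of a results page.
import math


def rank_attention(n_results):
    """Attention weights over ranks 1..n_results using a DCG-style
    discount, 1/log2(rank + 1), normalized to sum to 1. The discount
    shape is a common modeling assumption, not a measured click curve."""
    raw = [1 / math.log2(rank + 1) for rank in range(1, n_results + 1)]
    total = sum(raw)
    return [w / total for w in raw]


def expected_accuracy(accuracy_by_rank):
    """Attention-weighted expected accuracy of what a user absorbs."""
    weights = rank_attention(len(accuracy_by_rank))
    return sum(w * a for w, a in zip(weights, accuracy_by_rank))


# Ten results, each 50% likely to be accurate on average...
uniform = [0.5] * 10
# ...versus the SAME 0.5 average accuracy, but with the less accurate
# pages ranked highest (hypothetical numbers).
biased = [0.2, 0.2, 0.2, 0.5, 0.5, 0.6, 0.7, 0.8, 0.8, 0.5]

print(round(expected_accuracy(uniform), 3))  # matches the page average
print(round(expected_accuracy(biased), 3))   # lower: top slots dominate
```

Both result lists have the same average accuracy, but the attention-weighted value drops for the second one because the inaccurate pages occupy the slots users actually read, which is exactly why treating rank as a shortcut to correctness is risky.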
On Google, the AI result is clearly labeled as an AI Overview and sits at the top of the page in both browser and mobile search. Interestingly, the AI results differed between desktop browser and mobile for me. In Bing, the summary appears after the first two search results in the browser and at the top on mobile. The two summaries differ from each other, and neither is labeled as AI generated, but the facts are footnoted. Search engine companies constantly run tests on user reactions to their services and make small changes, so what I see today (Sept. 1, 2024) may well differ from what you see when you are reading this. Those frequent changes make fields like SEO both fascinating and frustrating. Companies that advise businesses on SEO will sometimes run their own studies. SEO tool company SEO Clarity published an interesting study of its own on the content of AI summaries, finding that they are generally put together using the top-ranked sites as the input for the AI generation. So AI-generated summaries would not, at present, be that different factually from the top sites, according to their algorithm. Like everything in SEO, though, that could change at any time.

References

Azzopardi, Leif, Ryen W. White, Paul Thomas, and Nick Craswell. 2020. "Data-Driven Evaluation Metrics for Heterogeneous Search Engine Result Pages." In Proceedings of the 2020 Conference on Human Information Interaction and Retrieval, 213–222. New York: Association for Computing Machinery. https://doi.org/10.1145/3343413.3377959.

Azzopardi, Leif, Paul Thomas, and Nick Craswell. 2018. "Measuring the Utility of Search Engine Result Pages: An Information Foraging Based Measure." In Proceedings of the 41st International ACM SIGIR Conference on Research and Development in Information Retrieval, 605–614. New York: Association for Computing Machinery. https://doi.org/10.1145/3209978.3210027.
Bettencourt, Benjamin, Arif Ahmed, Nic Way, Casey Kennington, Katherine Landau Wright, and Jerry Alan Fails. 2022. "Searching for Engagement: Child Engagement and Search Engine Result Pages." In Proceedings of the 21st Annual ACM Interaction Design and Children Conference, 479–484. New York: Association for Computing Machinery. https://doi.org/10.1145/3501712.3535316.

Ghenai, Amira, Mark D. Smucker, and Charles L.A. Clarke. 2020. "A Think-Aloud Study to Understand Factors Affecting Online Health Search." In Proceedings of the 2020 Conference on Human Information Interaction and Retrieval, 273–282. New York: Association for Computing Machinery. https://doi.org/10.1145/3343413.3377961.

Novin, Alamir, and Eric Meyers. 2017. "Making Sense of Conflicting Science Information: Exploring Bias in the Search Engine Result Page." In Proceedings of the 2017 Conference on Human Information Interaction and Retrieval, 175–184. New York: Association for Computing Machinery. https://doi.org/10.1145/3020165.3020185.

Pogacar, Frances A., Amira Ghenai, Mark D. Smucker, and Charles L.A. Clarke. 2017. "The Positive and Negative Influence of Search Results on People's Decisions About the Efficacy of Medical Treatments." In Proceedings of the ACM SIGIR International Conference on Theory of Information Retrieval, 209–216. New York: Association for Computing Machinery. https://doi.org/10.1145/3121050.3121074.

About the Author

Amanda Sturgill, Associate Professor of Journalism at Elon University, is a 2024-2026 CEL Scholar, focusing on the intersection of artificial intelligence (AI) and engaged learning in higher education. Connect with her at asturgil@elon.edu.

How to Cite this Post

Sturgill, Amanda. 2024. "AI and the Truthiness of Search." Center for Engaged Learning (blog). Elon University, October 18, 2024. https://www.centerforengagedlearning.org/?p=10099.