In higher education, Artificial Intelligence (AI) offers real benefits like time savings and a tireless thought partner. But it can also mean significant anxiety for students and their instructors alike.  

What Makes People More Likely to Accept AI? 

Research has identified several factors that relate to how readily someone accepts generative AI (Ma and Huo 2023; Zhu et al. 2024). Here are some examples:  

  • Social influence: Do users believe other users find utility in AI? 
  • Pleasure: Is using AI enjoyable? 
  • Distinctiveness: Does AI offer unique and new benefits?  
  • Age: Younger users are more likely to embrace AI.  

What Makes People Less Likely to Use AI? 

AI offers benefits, but it also raises ethical issues. One is that users cannot understand how the AI is working (Ma and Huo 2023). Users may also worry that because the data AI was trained on were biased, AI results are likely to be biased as well (Ma and Huo 2023; Berlinski, Morales, and Sponem 2024; The White House 2023). These scholars and others have identified factors that discourage students from adopting AI. These include: 

  • Ethical anxiety: Users who expressed concerns about the ethics of AI were less likely to use it.  
  • Perceived risks: Users who felt that AI posed risks, such as inadvertently plagiarizing, losing control of private data, or losing the ability to do tasks now done by machines, were less likely to use it. 
  • Price value: Cost plays a significant role in students’ decisions to use generative AI products (Zhu et al. 2024). 

Students and Faculty See Things Differently 

While popular articles and pundits note that AI has a lot of potential for changing education, existing studies suggest that students and faculty have different considerations. 

[Image: A black and white robot with a slight frown sits behind a teacher's desk, in front of a classroom chalkboard. Image generated using Adobe Illustrator AI Generator (2024).]

Interesting differences include: 

  • Timeframe: Students are more interested in practical use for the tasks they must complete immediately (Zhu et al. 2024), while faculty are more interested in thoughts about the longer-term status of education (Gilbert 2024).  
  • Threats: Although students and faculty both worry that AI use can amount to plagiarism, students worry about their personal accountability (Klee 2023), while faculty worry more about costs to institutions and society. Faculty also worry about the long-term effects on student capability when students fail to meet learning objectives because AI does their assessments (Stover 2023).  

It’s worth considering how, when, and by whom generative AI will be accepted. Today, the use of AI as a higher education tool is raising both eyebrows and thumbs.  


References 

Berlinski, Elise, Jérémy Morales, and Samuel Sponem. “Artificial imaginaries: Generative AIs as an advanced form of capitalism.” Critical Perspectives on Accounting, no. 99 (2024). https://doi.org/10.1016/j.cpa.2024.102723

Gilbert, Thomas Krendl. “Generative AI and generative education.” Annals of the New York Academy of Sciences, no. 153 (2024): 11-14. https://doi.org/10.1111/nyas.15129

Klee, Miles. “She was falsely accused of cheating with AI—and she won’t be the last.” Rolling Stone, June 6, 2023. https://www.rollingstone.com/culture/culture-features/student-accused-ai-cheating-turnitin-1234747351/

Ma, Xiaoyue, and Yudi Huo. “Are users willing to embrace ChatGPT? Exploring the factors on the acceptance of chatbots from the perspective of AIDUA framework.” Technology in Society, no. 75 (2023): 102362. https://doi.org/10.1016/j.techsoc.2023.102362

Stover, Dawn. “Will AI Make Us Crazy?” Bulletin of the Atomic Scientists 79, no. 5 (2023): 299–303. https://thebulletin.org/premium/2023-09/will-ai-make-us-crazy/

The White House. “FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.” October 30, 2023. Accessed September 15, 2024. https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/

Zhu, Wenjuan, Lei Huang, Xinni Zhou, Xiaoya Li, Gaojun Shi, Jingxin Ying, and Chaoyue Wang. “Could AI Ethical Anxiety, Perceived Ethical Risks and Ethical Awareness About AI Influence University Students’ Use of Generative AI Products? An Ethical Perspective.” International Journal of Human–Computer Interaction (March 2024): 1–23. https://doi.org/10.1080/10447318.2024.2323277

About the Author 

Amanda Sturgill, Associate Professor of Journalism at Elon University, is a 2024-2026 CEL Scholar, focusing on the intersection of artificial intelligence (AI) and engaged learning in higher education. Connect with her at asturgil@elon.edu

How to Cite this Post 

Sturgill, Amanda. 2024. “AI and Anxiety: Perspectives from Higher Education.” Center for Engaged Learning (Blog). Elon University, November 15, 2024. https://www.centerforengagedlearning.org/ai-and-anxiety-perspectives-from-higher-education/.