Is AI going to make our lives worse? An interesting study from Elon’s Imagining the Digital Future Center (Rainie and Husser 2024) recently asked a version of that question in relation to deep fakes (United States Government Accountability Office 2020) and politics, and it found that, in that domain at least, the attitude was dour.

At my campus and at others around the world, instructors and administrators have been asking themselves the same question, with a focus on education. The questions are dramatic, and sometimes they are existential.  

  • Is the essay dead? 
  • Do we really have to interpret student chicken scratch created in class? 
  • Do you need to teach coding if a machine produces code at the click of a button? 
  • Do we need to teach first graders prompt engineering? How about college sophomores?
  • How do we convince learners and families that higher education still matters? If the chatbot can solve your problems, DOES higher education still matter?

One especially interesting finding in that poll concerned first-hand experience with AI: fewer than half of respondents reported having tried it. Among faculty I know, the reactions have been similar to those reported in trade press articles about the topic (Mowreader 2024). A few early adopters have incorporated AI into their classes and research with glee. Many colleagues have tried a few things. A few others see only the downsides and want nothing to do with it. It’s not too hard to place those reactions into an older model of technology acceptance: Rogers’s Diffusion of Innovations (Rogers 2003).

Figure: A bell-shaped curve showing the distribution of innovators, early adopters, early majority, late majority, and laggards.
By Rogers Everett – Based on Rogers, E. (1962) Diffusion of innovations. Free Press, London, NY, USA., Public Domain, https://commons.wikimedia.org/w/index.php?curid=18525407
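
For the quantitatively curious: Rogers’s categories aren’t arbitrary labels. He slices the adoption-time bell curve at fixed standard deviations from the mean, which is where the familiar 2.5 / 13.5 / 34 / 34 / 16 percent shares come from. The short Python sketch below is my own illustration, not something drawn from Rogers (2003) or the figure above; it simply shows those shares falling out of a standard normal distribution.

```python
# Illustrative sketch only (my addition, not from Rogers 2003 or the figure above):
# Rogers defines adopter categories by how many standard deviations before or
# after the mean adoption time someone adopts.
from statistics import NormalDist

z = NormalDist()  # standard normal: adoption time measured in SD units

categories = {
    "Innovators":     z.cdf(-2.0),                # more than 2 SD ahead of the mean
    "Early adopters": z.cdf(-1.0) - z.cdf(-2.0),  # between 1 and 2 SD ahead
    "Early majority": z.cdf(0.0) - z.cdf(-1.0),   # within 1 SD ahead
    "Late majority":  z.cdf(1.0) - z.cdf(0.0),    # within 1 SD behind
    "Laggards":       1.0 - z.cdf(1.0),           # more than 1 SD behind
}

for name, share in categories.items():
    print(f"{name:15} {share:.1%}")
# Prints roughly 2.3%, 13.6%, 34.1%, 34.1%, 15.9% -- the familiar
# 2.5 / 13.5 / 34 / 34 / 16 split usually quoted for Rogers's curve.
```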

If you think of AI as a disruptive general technology, a newer model might have some merit as well: the Gartner Hype Cycle (Gartner, n.d.). It is designed to help investors think about whether buzzy new technology merits support. As the company says on its website, “If you’re willing to combine risk-taking with an understanding that risky investments don’t always pay off, you could reap the rewards of early adoption.” The company suggests new technologies go through a series of steps in the market as they become integrated into work:

  • An innovation trigger
  • The peak of inflated expectations
  • The trough of disillusionment
  • The slope of enlightenment
  • The plateau of productivity

Immediate risk and reward might be strange concepts in academia, with its esteem for thorough reflection and consideration, not to mention its consensus-based governance. But as Mowreader (2024) reports, “Among students, 72 percent of respondents to a winter 2023 Student Voice survey by Inside Higher Ed and College Pulse believe their institution should be preparing them ‘a lot’ or ‘somewhat’ to use AI in the workplace.”

When it comes to generative AI and higher education, it seems like we are somewhere between steps 2 and 3, the peak of inflated expectations and the trough of disillusionment. It’s tempting to revert to the “more research is needed” stance that ends so many research articles, and that stance is warranted. The earliest studies have only just started to come out, and most are think pieces or, at best, limited case studies that have heuristic value but haven’t yet accreted the weight of evidence needed for confidence.

References

Gartner. n.d. “Gartner Hype Cycle.” Gartner, Inc. https://www.gartner.com/en/research/methodologies/gartner-hype-cycle.

Mowreader, Ashley. 2024. “Academic Success Tip: Infusing AI into Curricular Offerings.” Inside Higher Ed, May 7, 2024. https://www.insidehighered.com/news/student-success/academic-life/2024/05/07/how-professors-are-using-and-teaching-generative-ai.

Rainie, Lee, and Jason Husser. 2024. “National Public Opinion Poll: AI and Politics ’24.” Imagining the Digital Future Center, May 15, 2024. https://www.elon.edu/u/elon-poll/wp-content/uploads/sites/819/2019/01/110314_ElonPoll_Issues.pdf.

Rogers, Everett. 2003. Diffusion of Innovations, 5th Edition. New York: Simon and Schuster.

United States Government Accountability Office. 2020. “Science and Technology Spotlight: Deep Fakes.” GAO | Science, Technology and Analytics Assessment, February 2020. https://www.gao.gov/assets/gao-20-379sp.pdf.

About the Author

Amanda Sturgill, Associate Professor of Journalism at Elon University, is the 2024-2026 CEL Scholar, focusing on the intersection of artificial intelligence (AI) and engaged learning in higher education. Connect with her at asturgil@elon.edu.

How to Cite this Post

Sturgill, Amanda. 2024. “AI, Higher Ed and the Hype Cycles.” Center for Engaged Learning (blog), Elon University. June 20, 2024. https://www.centerforengagedlearning.org/ai-higher-ed-and-the-hype-cycle/.