AI-generated image of a robot and a woman in a lab coat reading together in a library.
The image was created by ChatGPT using the following prompt:
Create image of a woman in a lab coat holding hands with a robot while they both look through a stack of academic journals in front of them. The setting is a library.

Would you use AI to create materials for a tenure portfolio? How about a reference letter for a student’s graduate school application? For a conference? If so, what would you do with it? As faculty consider the risks and benefits of AI use, one area to think about is the audience.

There are many different opinions about the propriety of AI use, and it matters what the people who receive your materials think. It turns out that it’s not just academics who do research on this question; industry does, as well. Wiley, an academic publisher, recently released a report based on a large, global survey of academics on the particular topic of AI use in research. 

The Findings 

When it comes to scholarship, more academics are finding ways to use AI tools. This is not the first year that Wiley has conducted their survey on AI use, and they have found that use is growing. The growth is not uniform, though. In the survey report, they break down their respondents by how long they have been using the tools, using diffusion-of-innovations language such as early, mid, and late adopters. The early adopters are less likely to think that AI as a research tool is meeting all of their goals. 

This carries a practical implication for academics. We should, and must, be involved in thinking about the utility of AI and the positive and negative ramifications of the tools for teaching, scholarship, and bigger questions like the nature of knowledge and the functioning of society. But that thinking should be combined with familiarity with the tools themselves. According to Wiley’s data, your opinion may change once you’ve tried them. 

For those who do use it, there are better and worse applications, according to the survey. You can think of them as higher- and lower-stakes tasks. Respondents were more likely to approve of AI assistance in conducting literature reviews, as a brainstorming partner, and in some writing tasks like proofreading. However, respondents thought that humans were better than AI at tasks that required judgment. This included tasks like critically evaluating information and peer reviewing the work of others. In other words, respondents don’t want you using AI to evaluate their work. 

When AI is used, researchers want to know the details: which tools were employed and specifics about the use, including the actual prompts given to the AI. At the same time, respondents felt that institutions, ranging from employers to grant agencies to publishers, were not meeting the moment in terms of AI guidance. In fact, only 41 percent felt that institutions gave enough guidance, and 57 percent saw that lack of guidance as a barrier to doing their work. Wiley also asked about the fields where respondents worked; later-career researchers in medicine and the social sciences were more likely to feel hamstrung by the lack of guidance. Researchers in the Americas also cited a lack of guidance more frequently than researchers from other geographic regions. 

Possible Takeaways from This Work

  • Audiences assess the stakes of AI support when deciding if AI use is appropriate. There is a difference between a tool fixing your subject-verb agreement and a tool telling you whether a grant applicant’s proposed problem is impactful. 
  • Audiences are on the same learning curve as prospective users. Lack of guidelines can reflect a lack of prior consideration, which means your choices may be judged in the moment. For faculty who supervise undergraduate researchers, modeling a process of figuring out AI use expectations prior to beginning a project may be useful.  
  • Experienced users are finding limitations to AI use. Researchers may need low-stakes “sandbox” experiences with AI tools to discover limitations separately from high-stakes research activities.  

As with any tool, AI use requires discernment, and its appropriateness varies on a case-by-case basis. 


References 

Wiley. 2025. ExplanAItions 2025: The Evolution of AI in Research. Hoboken, NJ: Wiley. https://www.wiley.com/en-us/about-us/ai-resources/ai-resources/ai-study/.


About the Author 

Amanda Sturgill, associate professor of journalism, is the 2024-2026 CEL Scholar. Her work focuses on the intersection of artificial intelligence (AI) and engaged learning in higher education. Dr. Sturgill also previously contributed posts on global learning as a seminar leader for the 2015-2017 research seminar on Integrating Global Learning with the University Experience.

How to Cite This Post 

Sturgill, Amanda. 2026. “Academic AI and Audience: Thoughts for Research.” Center for Engaged Learning (blog). Elon University. April 7, 2026. https://www.centerforengagedlearning.org/academic-ai-and-audience-thoughts-for-research/.