I am a professor of journalism, and it’s a challenging time to make the case for why journalism still matters. I give my students a couple of reasons: first is the ability to access and make sense of a variety of sources, such as databases and human experts. Second is that professional journalists follow a code of ethics that lets audiences know what the rules are for finding and presenting information. The code’s first element, “seek truth and report it,” covers important concepts like reporting fairly, inclusively, and without self-censoring. 

AI and Ethics

At the university level, it’s common to have ethical standards as well, summarized in an honor code. I was fascinated to read a fall 2025 article in Nature about tolerance for ethical lapses in the face of AI. The authors note that as humans delegate tasks to machines, that delegation can become a source of more ethical lapses. A student might, for example, delegate text production for an assignment to an AI tool. These tasks may seem like insignificant responsibilities, such as checking for grammatical correctness, but they add up to a loss of learning and critical thinking. This has some compelling implications for engaged learning.  

The authors state that the “moral cost” of appearing unethical deters immoral behaviors. In other words, people may choose the high road because to do otherwise could harm their reputations. But work that is delegated to AI, the authors argue, can turn the process into a “black box.” For example, if a student asks an AI tool to write a research paper, the AI may satisfy the query by fabricating realistic-sounding information attributed to realistic-looking sources. It makes things up.  

Sitting down to deliberately type up fabrications for an assignment may feel more morally costly than unquestioningly accepting AI output, because students feel more involved in the process. My reading and conversations with students suggest that students often focus on the product as what professors want, not the process of creating that product.  

The study, which included a large number of trials and participants, found that participants were more likely to cheat when they were less obviously involved in the process. This has clear implications for academic integrity as it is presently understood. It also matters when our work with students brushes against the world outside the university. For example, when students participate in community-based learning, AI tools might increase their productivity in completing tasks, but relying on them could set a bad example in a context like youth tutoring. And products created with AI could cause harm if they carry bias integrated from the tool.   

Real-World Consequences

Students in internships may encounter AI as part of the workflow, depending on the field. While regular employees may understand what counts as in-bounds versus out-of-bounds use, that may be less clear for an intern. For example, it might be acceptable for a teacher to use AI tools to generate differentiated lesson plans, but not acceptable to ask AI to write comments on student work. That boundary might not be communicated to student teachers. Research has shown that when young employees are uncertain about how to manage novel tasks at work, they lean on tools to help decode those tasks and to provide examples. Other research indicates that generative AI is a popular tool for this. Student internship preparation could include guidance on how to figure out what’s acceptable, but employers may also need to make clear what the consequences of a moral lapse would be.  

Generative AI is a tool that can reshape moral boundaries in many different contexts. Students should learn how and why to explicitly address the moral implications of working with AI. 

References 

Baird, Neil, Alena Kasparkova, Stephen Macharia, and Amanda Sturgill. 2022. “‘What One Learns in College Only Makes Sense When Practicing It at Work’: How Early-Career Alumni Evaluate Writing Success.” In Writing Beyond the University: Preparing Lifelong Learners for Lifewide Writing, edited by Julia Bleakney, Jessie L. Moore, and Paula Rosinski. Elon University Center for Engaged Learning. https://doi.org/10.36284/celelon.oa5.10.

Flaherty, Colleen. 2025. “How AI Is Changing—Not ‘Killing’—College.” Inside Higher Ed, August 29, 2025. https://www.insidehighered.com/news/students/academics/2025/08/29/survey-college-students-views-ai. 

Greene-Santos, Aniya. 2024. “Does AI Have a Bias Problem?” NEA Today. https://www.nea.org/nea-today/all-news-articles/does-ai-have-bias-problem. 

Iowa State University. n.d. “Academic Integrity.” https://celt.iastate.edu/prepare-and-teach/facilitate-learning/academic-integrity/. 

Köbis, Nils, Zoe Rahwan, Raluca Rilla, et al. 2025. “Delegation to Artificial Intelligence Can Increase Dishonest Behaviour.” Nature. https://doi.org/10.1038/s41586-025-09505-x. 

Society of Professional Journalists. 2014. “SPJ Code of Ethics.” Revised September 6, 2014. https://www.spj.org/spj-code-of-ethics/. 

About the Author 

Amanda Sturgill, associate professor of journalism, is the 2024-2026 CEL Scholar. Her work focuses on the intersection of artificial intelligence (AI) and engaged learning in higher education. Dr. Sturgill also previously contributed posts on global learning as a seminar leader for the 2015-2017 research seminar on Integrating Global Learning with the University Experience.  

How to Cite This Post 

Sturgill, Amanda. 2025. “Generative AI and Professional Ethics.” Center for Engaged Learning (Blog). September ##, 2025. https://www.centerforengagedlearning.org/?p=11519