How to Respond to Cheating While Not Being a Jerk
A playbook for navigating academic plagiarism
The other day, I helped an adjunct instructor teaching an undergraduate business course work through a common academic integrity scenario. The exchange took place over email, and I thought the key takeaways from our asynchronous discussion might help others facing similar challenges this semester.
The Scenario
An undergraduate business instructor asks their students to write a 1-2-page paper in response to the following prompt:
Assignment Prompt
Imagine you’re the social media manager for a large hotel chain. Guest reviews on platforms like TripAdvisor and Expedia play a big role in shaping travelers’ decisions, and replying to reviews helps build brand trust and maintain a customer-centered reputation. You’ve been tasked with helping local managers respond to all customer reviews. The chain’s top managers want to integrate an AI tool that would automatically respond to customer reviews across several social media and travel booking platforms. The chatbot can decipher positive and negative reviews, generate tailored responses, and send responses to complex reviews to managers for final approval. Customers would not be notified that the responses were AI-generated, and managers plan to use the bot to capture contact info for marketing.
Write a 1-2-page paper explaining whether you’d approve this tool, and if so, what recommendations you have for your bosses.
The Eye Test, or the AI Test?
The instructor goes to grade their students’ papers and comes across one that appears to be primarily written by AI. They upload the paper to Turnitin, which flags 6 of the paper’s 21 sentences as likely AI-generated. The instructor emails the student, naming their suspicions about the work and asking for an explanation of how the student completed the assignment.
I used ChatGPT to paraphrase the student’s response:
“I started this assignment by carefully reading the case study in the textbook and used that material to guide my understanding of the key issues. I then used Microsoft Edge and CoPilot to explore those topics further, focusing on relevant keywords to deepen my comprehension. When I find helpful information, I sometimes copy short excerpts into a separate notes or rough draft document to organize my thoughts, but I never copy anything directly into my final submission.
My phrasing might have been misinterpreted as unoriginal or plagiarized due to my use of common business terminology. In our business program, students develop a shared academic vernacular, which can make our writing sound similar to existing literature.
I hope this explanation clarifies how I approached my research and writing process for this assignment. Thank you again for your understanding—I look forward to your response.”
The instructor is unsure what to do next: Should they take the student’s word for it and grade the assignment as usual? Should they ask more clarifying questions to better understand how the student used Copilot, and suggest more responsible processes? Should they push back to get the student to admit wrongdoing? Should they inform their dean to seek guidance on next steps?
Here’s what I told the instructor (via email):
“The main thing I’d recommend is providing more specific assignment instructions along with a rubric so students understand what you’re looking for. In the paper, the student discusses how human oversight is necessary for handling complex reviews and for auditing responses, but they fail to discuss how they would implement that oversight. (After all, anyone can acknowledge that human oversight is important for chatbots; that’s not an original thought.)
Providing students with a set of ethical or analytical frameworks for navigating these scenarios might lead to more original and detailed responses. I’d also challenge you to give explicit guidance on the information sources (press releases, case studies, research papers) you’d want them to cite. [The student’s paper didn’t contain any citations.] In education policy (my field), we have clearly defined frameworks that policy analysts use to reveal different characteristics of education policies. I expect students to use those frameworks and to name them in their reflections. I also give them examples of where to reliably find information to inform their analysis (school district websites, state legislative websites, journalistic news coverage, etc.).
I’d imagine that a generic response like this wouldn’t be helpful in a real-life business setting. The student said they’d approve the AI tool’s integration, but a ton of logistics would need to be worked out after that decision, none of which they address.
Technically, the student met all the assignment requirements, partly because there was no rubric requiring the use of specific frameworks, skills, information sources, and thought processes. And if the student’s use of AI isn’t explicitly prohibited through your course-level AI policy or assignment instructions, then there’s little basis for penalizing them. You may choose to make suggestions in your assignment feedback, but it may be best to assess the student’s work based on what’s there and make your expectations more explicit in future assignments.”
This instructor’s real concern, though never stated outright, appears to be twofold: 1) the work reads like it wasn’t written by the student, and 2) the work would be ineffective in a real-life scenario.
Let’s take these concerns one by one.
The work reads like it wasn’t written by the student
Humans are hardwired to mistrust artificially generated content. Psychologists might explain this mistrust by pointing to the affect heuristic, our tendency to make judgments based on feelings rather than hard data. LLMs can’t feel, reason, or conceptualize the truth, so they can’t draw on the affect heuristic the way humans do. Most generative AI platforms rely on similar transformer models and neural network structures, which produce repetitive language patterns that we’re becoming more and more accustomed to recognizing. And because GPTs are built to produce responses that are apolitical, wary of sweeping claims, and scrubbed of overt bias, those responses are often more general and less persuasive than human writing. Common concerns about accuracy (hallucinations) and ethics only deepen the widespread mistrust of AI outputs.
Work that is technically accurate but untrustworthy is unviable in the “real world.” Imagine a lawyer asking AI to write a memo for a client. The lawyer could say, “But I checked all the information; it’s correct.” Still, would the client be satisfied paying $250 an hour to a lawyer who, instead of putting years of legal experience to work, offloads the task to an emotionless, truth-oblivious chatbot that can’t reason?
Taking all this into consideration, we can shift our focus from accusing students of irresponsible AI use to giving them feedback on how to avoid the telltale characteristics of AI-generated work: descriptions of concepts that lack plausible, detailed examples informed by lived experience, overuse of adjectives and em dashes, and so on. (More on this later in the piece.)
The work would be ineffective in a real-life scenario
Because plagiarism checkers are so unreliable, and because it’s nearly impossible to definitively prove that a student quoted or paraphrased AI without attribution, I don’t bother accusing students of AI misuse or investigating it. I’ve found that AI can’t produce A-level work in my course without considerable oversight, prompting, and editing from its user, all of which require skills that align with my course’s learning objectives. If students can use AI to produce high-quality work that doesn’t appear AI-generated and that contains accurate information and citations, then they’ve produced a piece of work that would be viable beyond my class. They’ve met the assignment criteria. Though I hope all of my students complete assignments ethically, I don’t have the time, energy, or desire to be an AI investigator. Where AI appears to be quoted or paraphrased without attribution, I give feedback just as I would for human-created work, pointing out characteristics like these:
Overuse of adjectives and em dashes
Exclusive use of active voice (e.g., “The team launched the campaign” instead of “The campaign was launched by the team”)
Descriptions of concepts that lack plausible, highly detailed examples informed by lived experience
Fabricated sources or untrue facts
Vague statements (e.g., “Policymakers are influential in local communities” instead of “State senators pass legislation that directly impacts citizens of Baltimore City”)
Distorted or inconsistent formatting of text and visual elements
Inconsistent diction, structure, and flow, as if one or more paragraphs were written by a different author
Morally ambiguous writing, where the author lacks clarity, conviction, intent, or emotional rhetoric
Writing quality that doesn’t match the writer’s years of experience in the field or the course level
If you’re not ready to take such a hands-off stance, or if you’re navigating situations more complex than the one described above, remember that we have the following tools and practices at our disposal:
As suggested in Boston University’s academic conduct process, communicating with the student should be the first step in resolving any potential plagiarism violations.
As suggested by the University of Kentucky, don’t treat plagiarism detection results as the be-all and end-all. It may be best to avoid using the checkers altogether. However, if you do use them, you can (as sketched in code after this list) “set ranges of percentages that lead to various actions:
Meeting with the instructor and discussing the assignment;
Having students go over the report from the plagiarism detector, making revisions on their own, and attaching a brief summary of what they did;
Discussing the reports in groups in class, looking for similarly flagged work and providing a summary to the instructor;
Meeting with a representative of the Writing Center, either on their own or during a class session, to discuss the report and to create a plan for revising as necessary.”
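To make that tiered approach concrete, here’s a minimal sketch in Python of what a range-to-action mapping might look like. The percentage cutoffs are hypothetical placeholders: the University of Kentucky’s guidance names the actions but not specific thresholds, so you’d pick ranges that fit your course and your checker’s scoring.

```python
# Hypothetical sketch of a tiered response policy. The thresholds are
# invented placeholders; UK's guidance suggests setting ranges but doesn't
# prescribe specific percentages.

def recommended_action(similarity_pct: float) -> str:
    """Map a plagiarism checker's similarity percentage to a follow-up action."""
    if similarity_pct < 15:
        return "No action; grade as usual."
    if similarity_pct < 30:
        return ("Student reviews the detector's report, revises on their own, "
                "and attaches a brief summary of what they did.")
    if similarity_pct < 50:
        return ("Class groups discuss the reports, look for similarly flagged "
                "work, and provide a summary to the instructor.")
    return ("Student meets with the instructor and/or a Writing Center "
            "representative to discuss the report and plan revisions.")


# Example: the paper above had 6 of 21 sentences flagged (about 29%).
print(recommended_action(6 / 21 * 100))
```

Whatever ranges you choose, note that the score only ever triggers a conversation or a revision process, never an automatic penalty, which keeps the checker in an advisory role.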
Ask students for an assignment submission statement in which they describe their process for completing the assignment. That allows you to give feedback on their overall process and may reveal ways to balance efficiency and rigor.
Ask students to upload their AI chat history. If AI fluency is an explicitly stated learning outcome for an assignment, you might even grade students’ chat history.
Ask students to submit pre- and post-AI versions of assignments. If a student says they used AI to “polish” their writing after creating an original draft themselves, they should be able to produce the original draft and the AI-enhanced draft.
Become familiar with your university’s procedures for handling academic integrity violations. UNC Greensboro offers a great example.
The Bottom Line
I aim to lead with trust and skill development, not with catching students in the act or lecturing them on moral responsibility. The bottom line: people don’t trust work that appears to quote or paraphrase the outputs of generative AI chatbots. I’ve made the typical characteristics of those outputs clear to my students. Work that contains those characteristics will receive feedback and grades accordingly. And if students find more efficient ways to produce high-quality work with AI while avoiding those characteristics, more power to them.