What To Do When You’re Accused of AI Cheating
AI detectors like Turnitin and GPTZero produce false positives that can lead to innocent students being accused of cheating. Here’s advice from academics, AI scientists and students on how to handle it.
August 15, 2023
Your teacher says you used artificial intelligence to cheat. You did no such thing. Now what?
Stay calm, and let the facts help you.
Less than a year into the life of ChatGPT, teachers everywhere are getting AI-detecting tools that promise to expose when students use chatbots to cheat. By August, an AI detector made by the plagiarism-detection company Turnitin had already been run on more than 70 million assignments, the company said.
AI surveillance might deter cheaters. But sometimes these detectors get it wrong, too. And even a small “false positive” error rate means some students could be wrongly accused — an experience with potentially devastating long-term effects. Across tens of millions of assignments, an error rate of just 1 percent would add up to hundreds of thousands of wrongly flagged papers.
After I wrote about the arrival of the AI detector from Turnitin, I heard from many angry high school and college students (and some of their parents) claiming they had been falsely accused of AI cheating.
So I asked some of them how they handled the accusations, and I also sought some advice from experts in academic integrity and AI.
The clear lesson: You can fight back. Many told me sharing an article like this one with the instructor is a good place to start. (Hi, teachers. I’m on your side, too! I don’t want us to misuse tech in ways that could have dire consequences.)
Here are steps to consider. Hopefully you can find resolution before you get to the end of the list.
Start With a Non-Accusatory Conversation
I completely understand if you’re upset, but approaching your accuser with guns blazing might make matters worse.
Several students told me that arguing with their instructors came with a cost: Their teacher stopped trusting them and gave them a bad grade in the end anyway. But in other cases, a polite back-and-forth conversation resulted in an A.
“Speak directly to the instructor, in as polite and conciliatory terms as possible,” said Christian Moriarty, a professor of ethics and law at St. Petersburg College in Florida, who studies academic integrity. “Escalating makes everybody go on the defensive.”
Part of education is learning to advocate for yourself. Explain how you either didn’t use AI at all or used it only in ways that were permitted for the course.
Just remember: This is new to everyone. Many instructors haven’t yet had a chance to learn how Turnitin’s AI reports work; they’re different from the plagiarism reports the software has offered for years. With AI, a detector doesn’t have any “evidence” — just a hunch based on statistical patterns.
Even Turnitin says everyone should take a chill pill. “The first step should always be to have a conversation with the student,” said Turnitin’s chief product officer, Annie Chechitelli. “Our guidance is, and has been, that there is no substitute for knowing a student, knowing their writing style and background.”
Bring Along Data About AI Detector Errors
AI detectors might present scientific-looking percentages or scores, but nobody should treat those results as fact.
Too many educators think AI detectors are “a silver bullet and can help them do the difficult work of identifying possible academic misconduct,” said Sarah Eaton, an education professor at the University of Calgary and the editor of the International Journal for Educational Integrity. “The reality is that these products are not perfect.”
My favorite example of just how imperfect they can be: A detector called GPTZero even claimed the U.S. Constitution was written by AI.
In July, OpenAI — the company that made ChatGPT — shut down its own AI detector tool “due to its low rate of accuracy.”
In June, Turnitin reported that, at the level of individual sentences, its software incorrectly flags 4 percent of human-written text as AI-generated. These false positives are more common in cases where Turnitin detects that less than 20 percent of a document is AI-generated.
“Teachers should be using AI reports as resources, not deciders, and educators always make final determinations,” Chechitelli said.
Certain types of writing, especially on technical topics, are more likely to be erroneously flagged as AI-generated. There are only so many ways to explain cellular mitosis, so your words may be less likely to stand out as human. (I explain some of the science of how an AI detector flags writing that looks suspiciously average in this column.)
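To make that concrete, here is a toy sketch in Python of the general idea behind this kind of detector. It is not Turnitin’s or GPTZero’s actual method, and the function name and per-word probabilities below are invented for illustration: many detectors estimate how predictable each word is under a language model and grow suspicious when a passage is, on average, too predictable.

    # Toy illustration only: real detectors use a language model to score
    # how predictable each word is; the probabilities here are made up.
    import math

    def average_log_probability(word_probs):
        # Mean log-probability of the words in a passage.
        # Values closer to 0 mean the text is more predictable ("average").
        return sum(math.log(p) for p in word_probs) / len(word_probs)

    # Hypothetical per-word probabilities a language model might assign.
    formulaic_passage = [0.6, 0.5, 0.7, 0.65, 0.55]  # textbook-style explanation
    quirky_passage = [0.2, 0.05, 0.3, 0.1, 0.15]     # idiosyncratic human prose

    print(average_log_probability(formulaic_passage))  # about -0.52, reads as "AI-like"
    print(average_log_probability(quirky_passage))     # about -2.00, reads as "human"

That is why a straightforward, by-the-book explanation of a textbook topic can score as suspiciously machine-like even when a human wrote every word.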
There’s research that suggests detectors are biased against nonnative English speakers, and several of the students who shared their experiences were writing in English as a second, or even a third, language.
Want to drive home the point? Run some of your older writing, dated from before ChatGPT arrived in the fall of 2022, through an AI detector to see whether any of it gets flagged. If it does, the problem is clearly the detector, not your writing. (It’s a little aggressive, but one student told me he did the same with his instructor’s own writing to make the point.)
Some well-known AI scientists argue that the error rate in AI detectors means they just shouldn’t be allowed. “I think these tools should be banned and students shouldn’t be put in a position of having to do this type of stuff,” said Timnit Gebru, the executive director of the Distributed AI Research Institute. “The responsibility lies with the schools and governments.”
But don’t expect them to go away any time soon. “We will continue to improve our AI writing detection systems and we remain dedicated to making it available to educators,” Chechitelli said.
Try To Prove the Originality of Your Work
When your work gets flagged, your instructor might expect you to prove you didn’t use AI to cheat. That isn’t exactly fair — how can you prove a negative?
But you might be able to avoid more trouble by offering some evidence that you really did the work.
Several students I spoke with suggested Google Docs or Microsoft Word could help. Both offer a version history function that can keep track of changes to the file, so you can demonstrate how long you worked on it and that whole chunks didn’t magically appear. Other students recommended simply screen recording yourself writing.
Copying and pasting from an AI program also has some telltale signs that should be missing from an entirely original student work. For example, text pasted straight from ChatGPT can carry over its distinctive formatting, including the font. And all chatbots have a well-documented problem with making up facts and sources. (That also means that if your essay contains made-up facts or completely fabricated footnotes, it could be a sign you let AI do the writing.)
There are more traditional ways to show your work, too, including offering to give a live oral presentation. If you know your writing has been flagged as AI in the past, perhaps tell your instructor that upfront and seek feedback on drafts while the work is still in progress.
Another way to avoid being flagged in the first place: Be sure to write with a unique voice. “Writing with your own style with language you understand and commonly use will help show that you are the unique author of the assignment,” said Christopher Casey, the director of digital education at the University of Michigan at Dearborn.
Understand Your Right To Due Process
If a polite conversation doesn’t work, it’s time to learn your institution’s official rules for academic misconduct.
“It is perfectly reasonable to file an appeal or complaint or whatever your institution calls it, to be able to say I did everything correctly and the instructor is saying I didn’t,” Moriarty said.
Many universities have an office that helps students navigate academic misconduct allegations, sometimes called an ombudsperson or a student affairs office.
In some cases, instructors have gone rogue, using unauthorized AI detection tools to accuse students of cheating. Some higher education institutions ban such tools or have very specific guidance about how they’re supposed to be used. Also check the course syllabus for what it does or doesn’t say about using AI in the class.
In any case, instructors shouldn’t be using detection software “in a search-and-destroy way rather than in a way that supports student learning,” Eaton said.
It’s worth noting that some instructors have actually come to the conclusion that they can’t stop students from using AI. False accusations are “the crux of the issue with trying to ban students from using AI, especially for homework or online courses where students can and should not be monitored 24/7,” Casey said. (This fall, his campus will not allow AI detection reports to be part of any academic integrity processes.)
If all else fails and you need to pass a class to graduate, you or your parents could talk to a lawyer. “If you feel they didn’t follow their procedures, you might be able to have some sort of lawsuit,” Moriarty said. “But it’s a high bar, and it’s probably more money than a lot of people are willing to spend.”