We Got This Covered
Jaymie Vaz

Professor drops chilling truth about why students are now sabotaging their own essays, and yes, it is to protect their academic careers

College students are now intentionally sabotaging their own work to avoid being flagged by AI detection tools. Dr. Sam Illingworth, a professor at Edinburgh Napier University in Scotland, recently shared his observations on Reddit, pointing out a disturbing pattern where students deliberately add typos and use bad grammar in their assignments.

It sounds counterintuitive, right? Well, it turns out they’re trying to fool the AI detectors. According to Newsweek, Illingworth noted that some students are even running their perfectly human-written papers through “AI humanizer” tools, all just to dodge those pesky false positives. 

He put it pretty starkly, saying, “We’ve created a system where competent writing is treated as suspicious.” This system makes students second-guess their own abilities and forces them into these weird strategies.

With AI checkers now built into many formal submission processes, students sometimes don’t even get a chance to defend themselves

Getting falsely accused of using AI can have major consequences for students. Illingworth mentioned several instances where students were incorrectly penalized, compromising their studies. The big problem here is that AI detection systems just aren’t very good at what they do. As we rely on AI more, we’re seeing wilder stories about AI errors, whether it’s AWS’s AI deleting code or a child’s toy giving disturbing advice.

A 2023 study that looked at 14 different AI-detection systems found that none of them could even hit 80 percent accuracy. Researchers identified “serious limitations” and even characterized these systems as “unsuitable” for detecting AI cheating in classrooms. They concluded, “Our findings strongly suggest that the ‘easy solution’ for detection of AI-generated text does not (and maybe even could not) exist.”

An April 2023 study from Stanford University revealed that a shocking 61 percent of essays written by non-native English writers were flagged, on average, across seven different AI-detection tools, and an astonishing 97 percent were flagged by at least one of them. James Zou, a senior author of that study, warned that these detectors are simply too unreliable right now and need serious improvements and rigorous evaluation.

Illingworth’s biggest concern with these tools is their inherent bias. He explained that false positives disproportionately affect students based on their race, nationality, or first language. He called it “institutional prejudice, automated and given a confidence score.” He also admitted that he can’t reliably spot AI writing just by eye, and since the technology is so good now, “basing academic consequences on [eye detection] is dangerous.”

[Embedded Reddit comment by u/calliope_kekule in r/Professors]

Despite all this, Illingworth believes there is genuine potential for AI in education. What’s happening isn’t a discipline problem among students, but a rational adaptation to the tools available to them. He argues that it falls to educators to create uses for AI beyond detection, which he calls “a dead end,” such as treating it as a thinking partner or drafting tool, used critically and ethically, and teaching students to do the same.

This way, they don’t have to police something they aren’t equipped to understand.
