Australian universities are split on whether to adopt a new tool that claims to detect AI-generated plagiarism with a near-perfect success rate, with some citing concerns over out-of-date models and the minimal notice the sector was given to assess it.
Turnitin’s detection tool, launched this month, claims a 98% success rate at flagging writing with a “high probability” of being AI-generated.
Of the almost a dozen universities that responded to Guardian Australia, the University of Melbourne, the University of New South Wales and Western Sydney University have adopted the tool, and several others are considering integrating it into their detection programs.
But others said the Turnitin tool was rushed and raised concerns over its efficacy.
Deakin University’s associate professor in digital learning, Trish McCluskey, said that despite Turnitin’s claimed high efficacy rate, the university had not had the opportunity to test the claim before the tool’s public release.
“Education providers … are also concerned the tool has been trained using out-of-date AI text generator models,” McCluskey said.
“This overlooks the fact AI text generators constantly evolve in the complexity of their outputs, as has been widely reported with the recent release of GPT-4.”
The University of Sydney has also declined to adopt the AI detection feature without “adequate testing or visibility”.
“Our students have clearly told us we have a responsibility to teach them how to use AI tools properly and … develop their critical reasoning – recognising their futures will require this skill,” a spokesperson said.
“AI can help students learn and will be used in jobs of the future … we need to teach our students how to use it effectively and legitimately.”
The university has opted to revise assessments to prevent cheating, including more oral assessments, drafts and replacing some face-to-face or pen-and-paper exams.
The University of Wollongong said Turnitin’s tools were launched with “minimum notice” and “several issues” needed to be resolved before it committed to integrating the service.
“We would need confidence in its effectiveness – including being satisfied it is not incorrectly detecting use of generative AI chatbots at a significant rate,” a spokesperson said.
The university has updated its academic integrity policy to allow students to use ChatGPT with acknowledgement, giving academics the green light to integrate AI into teaching and assessment.
Griffith University has followed suit, incorporating the technology into learning and assessment and updating its student misconduct policies to recognise the emerging technology – including how to properly attribute AI sources.
Monash University, RMIT, UWA and ANU have also decided against using the tool while in its infancy.
Eric Wang, global head of Turnitin’s AI team, said the tool provided a degree of detection for educators based on the way AI writing systems tend to use “high probability words”, in a manner similar to predictive text on phones.
“We strongly feel like we succeeded,” he said.
“It’s not meant to be a punitive tool … where you’re making substantive decisions on a student’s future … it’s meant as a demonstration of where we’re headed.”
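The principle Wang describes — that machine-generated text tends to favour statistically predictable word choices — can be sketched with a toy model. Turnitin’s actual system is proprietary; everything below, from the probability table to the 0.1 threshold, is invented purely for illustration.

```python
# Illustrative sketch only: scores text by how "predictable" its word
# sequences are under a toy next-word probability table, then flags
# highly predictable text as possibly machine-generated. All numbers
# here are made up; a real detector would use a large language model.

TOY_BIGRAM_PROBS = {
    ("the", "cat"): 0.20,
    ("cat", "sat"): 0.30,
    ("sat", "on"): 0.50,
    ("on", "the"): 0.40,
    ("the", "mat"): 0.10,
    ("cat", "pondered"): 0.001,
    ("pondered", "quantum"): 0.0001,
}

def avg_word_probability(text: str, default: float = 0.0005) -> float:
    """Average the toy model's probability over consecutive word pairs."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    return sum(TOY_BIGRAM_PROBS.get(p, default) for p in pairs) / len(pairs)

def looks_machine_generated(text: str, threshold: float = 0.1) -> bool:
    """Flag text whose wording is, on average, highly predictable."""
    return avg_word_probability(text) >= threshold

print(looks_machine_generated("the cat sat on the mat"))           # True
print(looks_machine_generated("the cat pondered quantum entanglement"))  # False
```

Note that the sketch also illustrates the weakness the critics raise: the score is only a probability, with no link to a specific source, and a writer who simply uses unusual phrasing slips under the threshold.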
The University of Melbourne, which already uses Turnitin, has adopted the tool as one of many that could act as a “flag” for further investigation.
Western Sydney University has also adopted the tool for educational rather than punitive purposes.
“Advances in artificial intelligence continue to change the nature of graduates’ current and future work practices, skills and … education needs,” a spokesperson said.
“We should not assume AI is always a threat … as part of our approach to the ethical use of generative AI.”
UNSW has provided staff access to the Turnitin tool as one method of picking up suspected unauthorised use of AI but said changing the design of assessments remained the most effective way to limit its use.
“We recognise that students should not be overly dependent on technology, and independent thought and knowledge remain essential.”
An AI expert, Prof Toby Walsh, said it was right for universities to be cautious, as the tool only gives a probability that an assessment was written by AI, unlike traditional plagiarism detection, which can link text to specific source websites.
“It’s not going to be adequate to protect universities,” he said. “There are more constructive ways to embrace the tools because it’s going to be an arms race, and AI is going to be integrated into everything we use.”