AAP
Jennifer Dudley-Nicholson

Future AI chatbots may secretly dob on their users

Google DeepMind is testing an algorithm which may help AI detection programs. (Rounak Amini/AAP PHOTOS)

You might not be able to pass generative AI writing off as your own for much longer: data scientists have developed a way for the technology to dob on its users. 

Researchers from Google DeepMind released the findings on Thursday, outlining a technical solution that could watermark text produced by large language models such as Google Gemini and ChatGPT.

Artificial intelligence experts say the breakthrough could prove useful in fields including education, but warn it would not be a failsafe solution for identifying AI content or plagiarism.

The research project outlined its watermarking solution, called SynthID-Text, in the journal Nature.

Google DeepMind researchers investigated whether an algorithm could be used to "bias the word choice" of text produced by AI tools, producing a signature that could be read by AI detection programs.
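The article does not detail SynthID-Text's actual algorithm, but the general idea of biasing word choice toward a detectable signature can be illustrated with a simplified "green list" scheme, in which each word's context deterministically selects a subset of the vocabulary that the generator favours and a detector later counts. All names and parameters below are illustrative, not DeepMind's implementation:

```python
import hashlib
import random

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Seed a RNG from the previous token so the "green" subset is
    # reproducible by anyone who knows the scheme. (Hypothetical
    # scheme for illustration, not SynthID-Text's real algorithm.)
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))

def detect(tokens: list[str], vocab: list[str], fraction: float = 0.5) -> float:
    # Score text by the fraction of tokens falling in their context's
    # green list. Unwatermarked text should score near `fraction`;
    # text from a sampler biased toward green tokens scores higher.
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(1 for prev, tok in pairs if tok in green_list(prev, vocab, fraction))
    return hits / max(len(pairs), 1)
```

A watermarking sampler would nudge the model's probabilities toward each step's green list; because editing or paraphrasing replaces tokens, it dilutes the green-token fraction, which matches the "weakening" the study reports.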

The technology has been tested on more than 20 million Gemini chatbot interactions. (James Ross/AAP PHOTOS)

The study investigated two approaches: a "distortionary" method that affected the quality of the writing AI tools produced, and a "non-distortionary" approach that did not noticeably impact the text.

Researchers tested the technology on more than 20 million Gemini chatbot interactions and found both approaches made the AI-generated text easier to identify as computer-made.

"Our work provides proof of the real-world viability of generative text watermarks," the paper said. 

"(It) sets a practical milestone for accountable, transparent and responsible (large-language model) deployment."

However, the study noted these text-based watermarks could be "weakened" if users edited or paraphrased portions of the text. 

Despite the workaround, University of the Sunshine Coast computer science lecturer Erica Mealy said the watermarking technology could help to tackle a difficult modern challenge. 

"It is one of the biggest problems (for generative AI), so steps in the right direction are good," Dr Mealy said.

"Theoretically, you can watermark images and you can watermark videos but text, because it's such a simple mechanism, was going to be very interesting."

Experts say the technology may prove useful in the education sector, but isn't a failsafe solution. (Alan Porritt/AAP PHOTOS)

Text-based watermarks could be particularly useful in school settings, Dr Mealy told AAP, where detection programs were regularly delivering false positives and false negatives about the use of generative AI. 

She said AI watermarking technology could also prove useful in other academic settings, to detect whether chatbots had been used to contribute to studies. 

"It could be quite relevant in research papers because that's another area that people are looking at and working out whether people are using AI to generate their work," Dr Mealy said. 

"It's a dodgy topic and a bit of a dangerous area to go into because people get very fiery about their opinions."

Despite the research, Google has yet to reveal whether it will add SynthID-Text to its AI tools.
