The Guardian - AU
Jeff Sparrow

‘Full-on robot writing’: the artificial intelligence challenge facing universities

Artificial intelligence has been banging on universities’ doors for some time now. Illustration: Victoria Hart/Guardian Design

“Waiting in front of the lecture hall for my next class to start, and beside me two students are discussing which AI program works best for writing their essays. Is this what I’m marking? AI essays?”

The tweet by historian Carla Ionescu late last month captures growing unease about what artificial intelligence portends for traditional university assessment. “No. No way,” she tweeted. “Tell me we’re not there yet.”

But AI has been banging on the university’s gate for some time now.

In 2012, computer theorist Ben Goertzel proposed what he called the “robot university student test”, arguing that an AI capable of obtaining a degree in the same way as a human should be considered conscious.

Goertzel’s idea – an alternative to the more famous “Turing test” – might have remained a thought experiment were it not for the successes of AIs employing natural language processing (NLP): most famously, GPT-3, the language model created by the OpenAI research laboratory.

Two years ago, computer scientist Nassim Dehouche published a piece demonstrating that GPT-3 could produce credible academic writing undetectable by the usual anti-plagiarism software.

“[I] found the output,” Dehouche told Guardian Australia, “to be indistinguishable from an excellent undergraduate essay, both in terms of soundness and originality. [My article] was initially subtitled, ‘The best time to act was yesterday, the second-best time is now’. Its purpose was to call for an urgent need to, at the very least, update our concepts of plagiarism.”

Ben Goertzel proposed what he called the ‘robot university student test’, arguing that an AI capable of obtaining a degree in the same way as a human should be considered conscious. Photograph: Horacio Villalobos/Corbis/Getty Images

He now thinks we’re already well past the point at which students can generate entire essays (and other forms of writing) using algorithmic methods.

“A good exercise for aspiring writers,” he says, “would be a sort of reverse Turing test: ‘Can you write a page of text that could not have been generated by an AI, and explain why?’ As far as I can see, unless one is reporting an original mathematics theorem and its proof, it is not possible. But I would love to be proven wrong.”

Many others now share his urgency. In news and opinion articles, GPT-3 has convincingly written on whether it poses a threat to humanity (it says it doesn’t), and about animal cruelty in the styles of both Bob Dylan and William Shakespeare.

A 2021 Forbes article about AI essay writing culminated in a dramatic mic-drop: “this post about using an AI to write essays in school,” it explained, “was written using an artificial intelligence content writing tool”.

Of course, the tech industry thrives on unwarranted hype. Last month, in a piece for Inside Higher Ed, S Scott Graham described encouraging students to use the technology for their assignments, with decidedly mixed results. The very best, he said, would have fulfilled the minimum requirements, but little more. Weaker students struggled, since giving the system effective prompts (and then editing its output) required writing skills of a sufficiently high level to render the AI superfluous.

“I strongly suspect,” he concluded, “full-on robot writing will always and forever be ‘just around the corner’.”

That might be true, though only a month earlier, Slate’s Aki Peritz concluded precisely the opposite, declaring that “with a little bit of practice, a student can use AI to write his or her paper in a fraction of the time that it would normally take”.

Nevertheless, the challenge for higher education can’t be reduced merely to “full-on robot writing”.

Universities don’t merely face essays or assignments entirely generated by algorithms: they must also adjudicate a myriad of more subtle problems. For instance, AI-powered word processors habitually suggest alternatives to our ungrammatical phrases. But if software can algorithmically rewrite a student’s sentence, why shouldn’t it do the same with a paragraph – and if a paragraph, why not a page?

At what point does the intrusion of AI constitute cheating?

Deakin University’s Prof Phillip Dawson specialises in digital assessment security.

He suggests regarding AI merely as a new form of a technique called cognitive offloading.

“Cognitive offloading,” he explains, is “when you use a tool to reduce the mental burden of a task. It can be as simple as writing something down so you don’t have to try to remember it for later. There have long been moral panics around tools for cognitive offloading, from Socrates complaining about people using writing to pretend they knew something, to the first emergence of pocket calculators.”

Dawson argues that universities should make clear to students the forms and degree of cognitive offloading permitted for specific assessments, with AI increasingly incorporated into higher level tasks.

“I think we’ll actually be teaching students how to use these tools. I don’t think we’re going to necessarily forbid them.”

The occupations for which universities prepare students will, after all, soon also rely on AI, with the humanities particularly affected. Take journalism, for instance. A 2019 survey of 71 media organisations from 32 countries found AI already a “significant part of journalism”, deployed for news gathering (say, sourcing information or identifying trends), news production (anything from automatic fact checkers to the algorithmic transformation of financial reports into articles) and news distribution (personalising websites, managing subscriptions, finding new audiences and so on). So why should journalism educators penalise students for using a technology likely to be central to their future careers?

‘The occupations for which universities prepare students will, after all, soon also rely on AI, with the humanities particularly affected.’ Photograph: Dean Lewins/AAP

“I think we’ll have a really good look at what the professions do with respect to these tools now,” says Dawson, “and what they’re likely to do in the future with them, and we’ll try to map those capabilities back into our courses. That means figuring out how to reference them, so the student can say: I got the AI to do this bit and then here’s what I did myself.”

Yet formulating policies on when and where AI might legitimately be used is one thing – and enforcing them is quite another.

Dr Helen Gniel directs the higher education integrity unit of the Tertiary Education Quality and Standards Agency (TEQSA), the independent regulator of Australian higher education.

Like Dawson, she sees the issues around AI as, in some senses, an opportunity – a chance for institutions to “think about what they are teaching, and the most appropriate methods for assessing learning in that context”.

Transparency is key.

“We expect institutions to define their rules around the use of AI and ensure that expectations are clearly and regularly communicated to students.”

She points to ICHM, the Institute of Health Management and Flinders University as three providers that now have explicit policies, with Flinders labelling the submission of work “generated by an algorithm, computer generator or other artificial intelligence” as a form of “contract cheating”.

But that comparison raises other issues.

In August, TEQSA blocked some 40 websites associated with the more traditional form of contract cheating – the sale of pre-written essays to students. The 450,000 visits those sites received each month suggest a massive potential market for AI writing, as those who once paid humans to write for them turn instead to digital alternatives.

Research by Dr Guy Curtis from the University of Western Australia found respondents from a non-English-speaking background were three times more likely to buy essays than those with English as a first language. That figure no doubt reflects the pressures heaped on the nearly 500,000 international students taking courses at Australian institutions, who may struggle with insecure work, living costs, social isolation and the inherent difficulty of assessment in a foreign language.

But one could also note the broader relationship between the expansion of contract cheating and the transformation of higher education into a lucrative export industry. If a university degree becomes merely a product to be bought and sold, the decision by a failing student to call upon an external contractor (whether human or algorithmic) might seem like simply a rational market choice.

It’s another illustration of how AI poses uncomfortable questions about the very nature of education.

Ben Goertzel imagined his “robot university student test” as a demonstration of “artificial general intelligence”: a digital replication of the human intellect. But that’s not what NLP involves. On the contrary, as Luciano Floridi and Massimo Chiriatti say, with AI, “we are increasingly decoupling the ability to solve a problem effectively … from any need to be intelligent to do so”.

GPT-3 has convincingly written on whether it poses a threat to humanity, and about animal cruelty in the styles of both Bob Dylan, pictured, and William Shakespeare. Photograph: TT News Agency/Alamy

The new AIs train on massive data sets, scouring vast quantities of information so they can extrapolate plausible responses to textual and other prompts. Emily M Bender and her colleagues describe a language model as a “stochastic parrot”, something that “haphazardly [stitches] together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning”.
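To make the “stochastic parrot” idea concrete, here is a minimal, purely illustrative Python sketch – the tiny probability table and the generate function are invented for this example, and bear no relation to how GPT-3 is actually built. It produces text by repeatedly sampling the next word from probabilities attached to the previous word, with no reference to meaning; real models learn vastly richer probabilities from billions of examples rather than a hand-written dictionary.

import random

# Toy "language model": for each word, a hand-written probability table over
# possible next words. Real models learn these probabilities from vast corpora.
NEXT_WORD_PROBS = {
    "the": {"essay": 0.5, "student": 0.3, "university": 0.2},
    "essay": {"argues": 0.6, "is": 0.4},
    "student": {"writes": 0.7, "reads": 0.3},
    "university": {"assesses": 1.0},
    "argues": {"that": 1.0},
    "writes": {"the": 1.0},
    "reads": {"the": 1.0},
    "assesses": {"the": 1.0},
    "is": {"original": 1.0},
    "that": {"the": 1.0},
}

def generate(seed, length=8):
    # Stitch together a plausible-looking sequence, one sampled word at a time.
    words = [seed]
    for _ in range(length):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if dist is None:  # no known continuation: stop
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# Possible output: "the student writes the essay argues that the university"
# Fluent-looking, yet produced without any reference to meaning.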

So if it’s possible to pass assessment tasks without understanding their meaning, what, precisely, do the tasks assess?

In his 2011 book For the University: Democracy and the Future of the Institution, the University of Warwick’s Thomas Docherty suggests that corporatised education replaces open-ended and destabilising “knowledge” with “the efficient and controlled management of information”, with assessment requiring students to demonstrate solely that they have gained access to the database of “knowledge” … and that they have then manipulated or “managed” that knowledge in its organisation of cut-and-pasted parts into a new whole.

The potential proficiency of “stochastic parrots” at tertiary assessment throws a new light on Docherty’s argument, confirming that such tasks do not, in fact, measure knowledge (which AIs innately lack) so much as the transfer of information (at which AIs excel).

To put the argument another way, AI raises issues for the education sector that extend beyond whatever immediate measures might be taken to govern student use of such systems. One could, for instance, imagine the technology facilitating a “boring dystopia”, further degrading those aspects of the university already most eroded by corporate imperatives. Higher education has, after all, invested heavily in AI systems for grading, so that, in theory, algorithms might mark the output of other algorithms, in an infinite process in which nothing whatsoever gets learned.

But maybe, just maybe, the challenge of AI might encourage something else. Perhaps it might foster a conversation about what education is and, most importantly, what we want it to be. AI might spur us to recognise genuine knowledge, so that, as the university of the future embraces technology, it appreciates anew what makes us human.
