It’s getting close to the beginning of term. Parents are starting to fret about packed lunches, school uniforms and schoolbooks. School leavers who have university places are wondering what freshers’ week will be like. And some university professors, especially in the humanities, will be apprehensively pondering how to deal with students who are already more adept users of large language models (LLMs) than they are.
They’re right to be concerned. As Ian Bogost, a professor of film and media and computer science at Washington University in St Louis, puts it: “If the first year of AI college ended in a feeling of dismay, the situation has now devolved into absurdism. Teachers struggle to continue teaching even as they wonder whether they are grading students or computers; in the meantime, an endless AI cheating and detection arms race plays out in the background.”
Predictably, that arms race is already heating up. The Wall Street Journal recently reported that “OpenAI has a method to reliably detect when someone uses ChatGPT to write an essay or research paper. The company hasn’t released it despite widespread concerns about students using artificial intelligence to cheat.” This refusal infuriates those sectors of academia that touchingly imagine there must be a technical fix for the “cheating” problem. Clearly they haven’t read the Association for Computing Machinery’s statement on principles for the development of systems to detect generative AI content, which says: “reliably detecting the output of generative AI systems without an embedded watermark is beyond the current state of the art, which is unlikely to change in a projectable timeframe”. And digital watermarks, though useful, can also be problematic.
LLMs are a burning issue for the humanities in particular because the essay is such a key pedagogical tool for teaching students how to research, think and write. More importantly, perhaps, the essay also plays a central role in how students are graded and assessed. The bad news is that LLMs threaten to make that venerable pedagogy untenable – and there’s no technical fix in sight.
The good news is that the problem isn’t insoluble – if educators in these disciplines are willing to rethink and adapt their teaching to the new reality. Other pedagogies are available. But they require, if not a change of heart, then at least two changes of mindset.
The first is an acceptance that LLMs – as the distinguished Berkeley psychologist Alison Gopnik puts it – are “cultural technologies”, like writing, print, libraries and internet search. In other words, they are tools for human augmentation, not replacement.
The second, and perhaps more important, is a need to reinforce in students’ minds the importance of writing as a process. I think it was EM Forster who once said that there are two kinds of writer: those who know what they think and write it; and those who find out what they think by trying to write it. The vast majority of humanity belongs in the latter camp – which is why the process of writing is so good for the intellect. It forces one to figure out lines of argument that are coherent, choose evidence that is relevant, find sources of information and inspiration that are useful, and – most importantly – learn the craft of expressing oneself in sentences that are readable and clear. For many people it isn’t easy and doesn’t come naturally – which is why students resort to ChatGPT even when asked to write 500 words to introduce themselves to their classmates.
Josh Brake, an American academic who writes wisely about engaging with AI rather than trying to “integrate” it into the classroom, thinks that it’s worth making abundantly clear to students the value of writing as an intellectual activity. “If your students didn’t already see the value of writing as a process by which you think, then of course they will be curious about farming the labour out to an LLM. And if writing (or any other task for that matter) is truly only about the product, then why not? If the means to the end are unimportant, then why not outsource it?”
In the end, the problem that LLMs pose for academia can be solved, but doing so will require fresh thinking and a different approach to teaching and learning in some disciplines. The bigger problem is the glacial pace at which universities tend to move. I know this from experience. Way back in October 1995, the American scholar Eli Noam published a very perceptive article – “Electronics and the Dim Future of the University” – in Science. Between 1998 and 2001, I asked every UK vice-chancellor or senior university administrator I met what they thought of it. Blank stares all round.
Still, things have improved since then: at least now they’ve all heard of ChatGPT.
What I’ve been reading
Online crimes
Ed West has written an interesting blogpost about sentencing for online posts during the riots that followed the Southport stabbings, highlighting the inconsistency of the British justice system.
Loose Bannon
There’s a fascinating interview in the Boston Review with the documentarian Errol Morris on Steve Bannon’s dangerous “dharma” – his sense of being part of an inevitable unfolding of history.
Online oblivion
There’s a sobering article by Niall Firth in the MIT Technology Review on efforts to conserve digital history for posterity in an ever-growing universe of data.