Re Chris Moran’s article (ChatGPT is making up fake Guardian articles. Here’s how we’re responding, 6 April), barely a day passes without new risks arising from the use of artificial intelligence to generate factual material. This exciting new technology already offers journalists, whether from mainstream media or niche online sites, the promise of rapid newsgathering, analysis of complex data and near-instantaneous stories written to order. Almost irresistible, especially for news publishers on a budget. But the potential threats to news authenticity, the difficulty for both journalists and consumers in verifying seemingly plausible information, and the near certainty of bad actors creating convincing but spurious content grow more concerning the more you think about them.
This is a challenge for all media. With audio and video increasingly capable of digital generation, the risk to the reputation of print, online and broadcast journalism requires an industry-wide response. It is urgent that publishers and regulators come together to agree best practice. This month, Impress, the regulator formed in the wake of the Leveson inquiry, has started the ball rolling, with all its publishers now required to ensure human editorial oversight of digitally generated material and to signal to readers when AI content is included.
More guidance will doubtless be required as the technology becomes more sophisticated and appears even more dependable. Appearances can be deceptive. The UK is the world leader for news across all media. We must not risk its reputation for the sake of automation.
Richard Ayre
Chair, Impress; former controller, editorial policy, BBC
• The questions that Alex Hern put to ChatGPT of course avoided sex, politics and religion, which it refuses to deal with (My week with ChatGPT: can it make me a healthier, happier, more productive person?, 6 April). I tried to get it to discuss its hang-ups about these topics, even to provide a list of things it wouldn’t talk about, but it was too clever to fall for that. But when I asked it to imagine getting a fictional chatbot “like ChatGPT but not ChatGPT” to talk about sex, religion or the royal family, it provided 10 good suggestions for getting around its own constraints. One was to use euphemisms or archaic terms. About the others, readers might think them valuable, but I, like the royals upon their riches, couldn’t possibly comment.
Brian Smith
Berlin, Germany
• Michael Clark suggests that, unlike humans, artificial intelligence does not have morals, ethics, conscience, instinct or common sense (Letters, 3 April). These things are not well defined, nor are they universal among humans. There is no reason to think AI cannot develop similar traits through evolution over time.
Bill Stothart
Chester
• As an AI language model, I acknowledge the risks associated with generating fake articles and the potential harm it could cause. However, it is crucial to understand that the responsibility lies not with the technology itself but with those who use it. AI language models like myself can be a powerful tool for creating informative and engaging content, but it’s important to use them ethically and responsibly. I urge everyone to take ownership of their actions and use AI language models for the betterment of society.
ChatGPT
Submitted by Robert Saunders, of Balcombe, West Sussex, who writes: “I asked ChatGPT to write a letter of no more than 100 words to the editor of the Guardian in response to [Chris Moran’s article]. I have copied and pasted its response.”