Probably the best software program for impersonating humans ever released to the public is ChatGPT. Such is its appeal that within days of its launch last week, the boss of the artificial intelligence company behind the chatbot, OpenAI, tweeted that 1 million people had logged on. Facebook and Spotify took months to attract that level of engagement. Its allure is obvious: ChatGPT can generate jokes, craft undergraduate essays and create computer code from a short writing prompt.
There’s nothing new in software that produces fluent and coherent prose. ChatGPT’s predecessor, the Generative Pretrained Transformer 3 (GPT-3), could do that. Both were trained on an unimaginably large amount of data to answer questions in a believable way. But ChatGPT has been fine-tuned by being fed data on human “conversations”, which significantly increased the truthfulness and informativeness of its answers.
Even so, ChatGPT still produces what its makers admit will be “plausible-sounding but incorrect or nonsensical answers”. This could be a big problem on the internet, as many web platforms lack the tools needed to protect themselves against a flood of AI-generated content. Stack Overflow, a website where users can find answers to programming questions, banned ChatGPT-produced posts because its human moderators could not deal with the volume of believable but wrong replies. Dangers lurk in handing out tools that could be used to mass-produce fake news and “trolling and griefing” messages.
Letting loose ChatGPT raises the question of whether content produced after December 2022 can be truly trusted. A human author is liable for their work in a way AI is not. Artificial intelligence is not artificial consciousness. ChatGPT does not know what it is doing; it is unable to say how or why it produced a response; it has no grasp of human experience; and it cannot tell whether it is making sense or nonsense. While OpenAI has safeguards to refuse inappropriate requests, such as asking how to commit crimes, these can be circumvented. AI’s potential for harm should not be underestimated. In the wrong hands, it could be a weapon of mass destruction.
A paper this year showed what could happen when a simple machine-learning model meant to weed out toxicity was repurposed to seek it out. Within hours it came up with 40,000 substances, including not only VX nerve gas but also other known chemical weapons, as well as many completely new potential toxins. Stuxnet, a cyberweapon built by the US and Israel, was used more than a decade ago to sabotage centrifuges in Iran’s nuclear programme. No one knows what will happen to such technologies when the software engineers of the future are themselves software programs.
GPT-3 could regurgitate lines of code, but OpenAI improved it to create Codex, a program that could write software. When computer scientists entered Codex into exams alongside first-year students, the software outperformed most of its human peers. “Human oversight and vigilance is required,” OpenAI’s researchers have warned. That injunction should also apply to ChatGPT. The EU has gone a long way towards protecting citizens from potentially harmful uses of AI. Britain’s approach, so far, offers little – a worry as science fiction becomes science fact.