Banned in Italy, under investigation in France and Canada, the artificial intelligence program ChatGPT can produce articles, essays and poems nearly as well as most people – though its output isn't always factually accurate. Can regulators outsmart the machine before it decides to write us out of the story altogether?
ChatGPT is a writing program based on artificial intelligence. It was launched by the US firm OpenAI last November. Less than three months later, it had 100 million registered users.
Microsoft's version is called Bing; Google calls theirs Bard.
I asked ChatGPT to produce an article of 250 words in "standard journalistic style" on the potential and dangers of writing programs based on artificial intelligence.
This is what I got, exactly as the machine offered it. It took less than 20 seconds for it to appear on my computer screen.
This is the article, as written by the bot:
As artificial intelligence (AI) continues to evolve and improve, it has become increasingly capable of performing tasks that were once reserved exclusively for human beings. One area in which AI has made significant strides is in writing, with programs like GPT-3 now able to generate high-quality prose that is difficult to distinguish from that written by a human.
While the capabilities of AI writing programs are impressive, they also come with significant dangers. One of the most pressing concerns is the potential for these programs to be used for malicious purposes, such as creating fake news or propaganda that is indistinguishable from genuine articles. This could be used to sway public opinion or spread disinformation, causing harm on a massive scale.
Another concern is the potential for AI writing programs to be used for plagiarism. With the ability to generate high-quality content quickly and easily, it becomes all too tempting for unscrupulous individuals to use AI to create content for their own purposes, whether that be to promote their own products or to pass off someone else's work as their own.
Finally, there is the risk that AI writing programs will be used to automate jobs traditionally performed by human writers. While this may be beneficial from a cost-saving perspective, it has the potential to result in significant job losses, particularly in industries that rely heavily on content creation.
In conclusion, while AI writing programs undoubtedly have the potential to revolutionize the way we write and consume content, it is essential that we remain aware of the potential dangers that come with their use. By taking steps to mitigate these risks and ensure that AI is used ethically and responsibly, we can harness its power for good and avoid the pitfalls that come with any new technology.
Vast collection of personal data
One problem the software itself does not mention is that it is built on the collection of vast amounts of data, much of it personal, which it then draws upon without the approval of the original sources.
Canada's federal privacy commissioner this week opened an investigation following a complaint alleging "the collection, use and disclosure of personal information without consent".
The Guardian newspaper in London has set up a special task force to deal with the fact that sites such as ChatGPT are inventing references to non-existent articles, crediting real journalists with work they never did – and doing it well enough to confuse the journalist named as the writer.
The dangers, for credibility, for legal responsibility and for professional reputations, are mind-blowing.
Australian mayor Brian Hood has threatened to sue OpenAI, the company behind ChatGPT, after the web app wrongly implicated him in a bribery and corruption scandal – a crime he in fact blew the whistle on.
"It's one thing to get something a little bit wrong; it's entirely something else to be accusing someone of being a criminal and having served jail time when the truth was the exact opposite," he said.
Scramble to regulate
Italian regulators said last Friday that the firm had no legal basis to engage in massive data collection and questioned the way it was handling the information it had gathered. The software is now banned in Italy.
European authorities including those of France, Ireland and Germany have since approached their Italian counterpart in an effort to establish a common position.
France's data protection authority, the CNIL (Commission nationale de l'informatique et des libertés), has already received two complaints.
Zoe Vilain of Janus International, a campaign group, filed the first, saying that when she tried to sign up for a ChatGPT account, she was not asked for consent to any general terms of use or privacy policy.
"We filed a complaint with the CNIL yesterday against #OpenAI #ChatGPT after a request for access to my personal data went unanswered. Find the details in @LInforme_ with @reesmarc https://t.co/tGxc1CrtDk"
— Zoe Vilain 🇪🇺 (@Zoe_Vilain) April 5, 2023
The other complaint came from David Libeau, a developer who wrote in his submission that he had found personal information about himself when he asked ChatGPT about his profile.
"When I asked for more information, the algorithm started to make up stories about me, creating websites or organising online events that were totally false," he wrote.
ChatGPT and similar programs are "trained" on huge bodies of text gleaned from the internet and are known to invent answers, though OpenAI said such "hallucinations" are less common with GPT-4, the latest version of the software.
OpenAI insists that it is "committed to protecting people's privacy" and believes the tool complies with existing laws.