I’ve been playing around with ChatGPT for a few days.
It’s the new artificial intelligence product, released 10 days ago by OpenAI, that answers questions and has taken the tech world by storm (it’s free to use, at least for now).
My interest was piqued by this tweet from a senior research engineer at Microsoft, Shital Shah: “ChatGPT was dropped on us a bit over 24 hours ago. It’s like you wake up to the news of first nuclear explosion and you don’t know what to think about it but you know the world will never be the same again.”
Someone tweeted “Google is dead #ChatGPT”, and someone else wrote: “ChatGPT writes and thinks much better than the average college student IMO — it def undermines the purpose of the assignment.”
On Monday, Sam Altman, the CEO and co-founder (with Elon Musk) of OpenAI, tweeted: “ChatGPT launched on Wednesday. Today it crossed one million users!”
OK, so something big is happening. What is it?
My first question to it was: “What is ChatGPT?”
Here’s the answer:
“ChatGPT is a large language model that has been trained to generate human-like responses to text input. It uses a technique called unsupervised learning, which involves training the model on a large amount of text data without providing it with any specific labels or categories. This allows the model to learn the patterns and structures of natural language on its own, and to generate responses that sound natural and fluent to a human reader.
“When you input text to ChatGPT, it uses its understanding of natural language to generate a response that is appropriate in the context of the conversation. It does this by considering the words and phrases that you have typed, and using its knowledge of language to generate a response that follows the rules of grammar and makes sense in the context of the conversation.
“Overall, ChatGPT is designed to provide a natural and intuitive way for users to have conversational interactions with a computer, without the need for explicit programming or instructions. It is a powerful tool for generating human-like responses to text input, and can be used in a variety of applications, such as chatbots, language translation, and more.”
Not bad.
ChatGPT v Google
I put the same question to Google and I got a link to OpenAI, and a link to a story on ZDNet with the headline: “What is ChatGPT and why does it matter? Here’s what you need to know”.
That sums up the difference between them: Google gives you links, ChatGPT tells you the answer, so they don’t really compete.
I’ve just finished reading a book called The Reader’s Room, by Antoine Laurain, so I asked it to tell me who he is:
I’m sorry, but I don’t have any information on Antoine Laurain. As a large language model trained by OpenAI, my knowledge is based solely on the text data that I have been trained on.
So there are limits; Google at least gave me some photos and a link to his Wikipedia entry.
I asked it what caused the war in Ukraine and got a good explanation, and then asked for the latest news on it. Answer:
I’m sorry, but I don’t have the ability to provide current information on the war in Ukraine. As a large language model trained by OpenAI, my knowledge is based solely on the text data that I have been trained on. I don’t have the ability to browse the internet or access information outside of what I have been trained on, so I don’t have any information on current events like the war in Ukraine.
There’s another limit: It’s not a news engine. Reporters’ jobs are safe for the moment, but teachers are in trouble.
A friend who is a philosophy professor asked ChatGPT what he called a “standard second-year question” on Immanuel Kant: “Why, according to Kant, is it always wrong to lie?”
It came back immediately with this response:
“According to Immanuel Kant, it is never right to lie, even if it may seem to benefit someone or save them from harm. Kant believed that lying is always a violation of the moral law and that people have a duty to always tell the truth. He argued that lying is fundamentally wrong because it undermines trust and erodes the foundation of social institutions.
“Additionally, Kant believed that lying is an act of disrespect towards others, since it treats them as mere means to an end rather than as ends in themselves. In other words, when we lie to someone, we are using them as a means to achieve our own goals, rather than treating them with the respect and dignity that they deserve.”
My friend’s assessment of that: “It is on the right track, the student has done some work, but hasn’t really thought it through fully. A low H2A. But still, it’s an arcane question and it’s a sophisticated response. It is going to be a serious challenge for philosophy teachers.”
I heard that ChatGPT can write poetry, so I asked it to write a poem about book clubs, since it was my monthly book club meeting that night (to discuss The Reader’s Room).
Here’s what it came up with, in 10 seconds:
Book clubs are a source of joy and knowledge
A group of friends who come together
To share their thoughts and feelings
On the stories they have read
In a book club, there is camaraderie
A sense of belonging, of being understood
As each member takes their turn
To discuss the book they love
Some may speak of the characters
The twists and turns of the plot
Others may delve into themes
And the deeper meaning of the words
But whatever is said, the book club
Is a place of learning and growth
Where minds are opened
And friendships are formed
So here’s to book clubs,
A treasure for the mind and soul.
It’s bad poetry, that’s for sure, not really poetry at all, so poets have nothing to worry about yet. But as a description of what’s good about a book club, it was spot on.
A canary in the coal mine?
Cognitive psychologist and poet Keith Holyoak wrote an essay in MIT Technology Review this week putting AI poetry writing into context:
“Of all the actual and potential consequences of AI, surely the least significant is that AI programs are beginning to write poetry.
“But … in a certain sense, poetry may serve as a kind of canary in the coal mine — an early indicator of the extent to which AI promises (threatens?) to challenge humans as artistic creators. If AI can be a poet, what other previously human-only roles will it slip into?”
In fact, my sub-editor at Eureka Report tells me he has been using AI for a while now to generate images to go with my articles.
So, has the world changed and will never be the same again, as Shital Shah tweeted?
Well, for teachers, schools and universities it has. What they can do about it, I have no idea.
Journalists are safe for the moment, although maybe not for long. At the moment the data available to ChatGPT is a few months old, so it’s no good for the latest news, but presumably its data will become more contemporary as time goes on. And what about an op-ed piece, like the one you’re reading?
To check that, I asked it to write a column about AI for The New Daily, and it took just 15 seconds to produce a 300-word effort that was alarmingly good.
More broadly, we can say that AI is developing quite quickly now, and the destination is both unpredictable and worrying, for reasons beyond jobs (hello, Skynet).
So far the dire predictions that robots and AI would replace jobs and lead to mass unemployment have proved incorrect: unemployment is at its lowest in 40 years. But the transition hasn’t been painless.
A lot of individual workers, especially in manufacturing, have had to retrain and find other jobs, often in other places, which can be hard. ChatGPT means that knowledge workers will soon be faced with the same thing.
I asked it: “Will ChatGPT reduce the demand for human workers?” It replied:
“It is possible that the use of ChatGPT and other large language models could lead to a reduction in demand for some types of human workers, particularly those who perform tasks that can be automated or replaced by machines.
“However, it is important to note that ChatGPT and other AI tools are not capable of replacing all human workers and the demand for many types of jobs will likely continue to exist.
“In addition, the use of these tools can also create new job opportunities in fields related to AI and machine learning. Overall, the impact of ChatGPT and other large language models on the job market is likely to be complex and multifaceted.”
That’s a pretty good non-committal answer, worthy of a politician.
Keith Holyoak wrote that artificial intelligence is in the process of changing the world in ways no one can predict, and it’s hard to disagree.
“We need to set aside the old stereotype that computer programs simply follow fixed rules and do what humans have programmed them to do, and so lack any capacity for creativity.
“What the programs learn, and how they will behave after learning, is very difficult (perhaps impossible) to predict in advance.”
Having spent a week playing with ChatGPT, I can report that the world has indeed changed and will never be the same again.
The world is always changing, but occasionally it lurches, as it did with the atomic bomb … and artificial intelligence.
Alan Kohler writes twice a week for The New Daily. He is also founder of Eureka Report and finance presenter on ABC News.