It’s early, but the pundits have already called it: 2023 is the year artificial intelligence (AI) hits the mainstream.
The hype has been building for a while now. OpenAI’s popular DALL-E 2 and ChatGPT took the internet by storm last year. All the tech bros have deleted “cryptocurrency” and “metaverse” from their Twitter bios and pivoted to writing LinkedIn broetry about how AI can automate your job (but, like, in a way that makes you richer and not retrenched). Just about every news outlet ran a “This Article Was Written By AI” story (including us).
I don’t think it has quite reached the height of cryptocurrency hype yet. That’s partly structural. Crypto turned every amateur “investor” into something between an evangelist and a Ponzi scheme promoter. There’s not the same pull with AI. Most people haven’t used a consumer-facing AI product for any reason other than to play around with it a bit.
That will change this year as Google Search and Microsoft’s Bing roll out artificial intelligence chatbots as part of their search engines. As with any technology, there are risks associated with AI becoming part of our everyday lives. There’s a long and rich history of fears about what artificial intelligence could do in the future (a fun one is Roko’s basilisk, although maybe fun is the wrong word here). More recently, Labor MP Julian Hill last week called for an AI white paper or inquiry over concerns about “cheating, job losses, discrimination, disinformation and uncontrollable military applications.”
What I’ve been worrying about lately is how artificial intelligence could actually make us less informed.
A quick refresher on artificial intelligence’s large language models like GPT-3. They work by analysing an enormous corpus of words to learn the statistical relationships between them. Then, when a user prompts one, it uses that analysis to choose an output based on what it calculates is the most probable answer. The end result is a very impressive, extremely coherent version of the autocomplete on your mobile phone.
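To make that concrete, here’s a toy sketch in Python (my own illustration, not anything OpenAI has published, and a drastic simplification: GPT-3 uses a neural network trained on hundreds of billions of words, not a table of word-pair counts). It generates text purely by asking “which word most often followed this one?”:

    from collections import Counter, defaultdict

    # A tiny "corpus" standing in for the enormous pile of text a model is trained on
    corpus = ("australia is a country . australia is old . "
              "australia is estimated to be around 55 million years old .").split()

    # Count which word tends to follow which
    following = defaultdict(Counter)
    for word, next_word in zip(corpus, corpus[1:]):
        following[word][next_word] += 1

    def most_likely_next(word):
        # No logic, no knowledge: just the most probable continuation
        return following[word].most_common(1)[0][0]

    # Generate text by repeatedly choosing the likeliest next word
    word, output = "australia", ["australia"]
    for _ in range(8):
        word = most_likely_next(word)
        output.append(word)
    print(" ".join(output))  # a fluent-sounding sentence assembled purely from word statistics

Swap the word-pair table for a neural network with 175 billion parameters and you have the same basic move at a scale that produces uncannily fluent prose. Nowhere in either version is there a step that checks whether the output is true.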
When it answers a question, it doesn’t use logic and it doesn’t “know” anything. Rather, it probabilistically spits out something based on the information it’s seen before. This leads to what have been called “hallucinations”, like the one included in Google’s chatbot advertising video that sent the company’s share price diving this week, or similar errors in Microsoft’s demo.
I find this concerning because artificial intelligence invites us to outsource our critical thinking and knowledge to something that is, essentially, a very well-read parrot repeating back what it’s heard.
When ChatGPT tells us that “Australia is estimated to be around 55 million years old”, it can’t tell us why it gave that answer because it doesn’t know why. It just repeats back what it has read, more or less.
If we zoom out, the information revolution of the past 20 years happened because technology reduced the cost of producing and distributing information to essentially nothing. (That’s why they’ll let anyone have a newsletter these days, folks!)
This has had a lot of good flow-on effects, but has also contributed to the degradation of our shared reality (in my humble opinion, a bad thing) by opening the floodgates to an endless current of bullshit.
No longer can we assume, purely because of the cost of production, that some effort must have gone into producing something (fact-checking a book, perhaps, or making sure its argument is rigorous).
Artificial intelligence lowers this barrier even further by making it simple for anyone to instantaneously create an infinite amount of coherent text (like, hypothetically, a reporter’s nomination rationale for a journalism award that they couldn’t be arsed to write an hour before the deadline last Friday night).
We’ve adapted to this influx of information by encouraging people to consider where their information comes from. When I hear a “fact”, finding its source is as simple as typing it into a search engine.
In fact, that’s what search engines have typically done: acted as a middleman between a user and someone else’s knowledge.
But if knowledge panels (those little modules that pop up when you search how old Brendan Fraser is, with the information pulled from Wikipedia) are dipping your toe in, then introducing artificial intelligence into search is the equivalent of using a submarine to dive into the Mariana Trench.
These AI-powered search engines are now giving you information that’s detached from its source. It’s foundationless, shallow knowledge, like repeating a factoid you were told once in passing.
The black-box nature of artificial intelligence also introduces more opportunities for tech companies to meddle with how we find out information. What’s to stop Microsoft from ensuring that its AI never mentions its competitor Google? Or letting advertisers pay to have their product or service promoted by the chatbot? How would you know?
Even Bing’s promise to include source links in some of its answers seems arbitrary; when I asked Bing to develop a workout program for me, it came up with one and linked out to a random workout website, without explaining why it had chosen that exact workout. Perhaps this will improve.
Introducing artificial intelligence chatbots into search will change a lot about the way we find out about the world. And that’s before we even consider the impacts on industries built around search as it is, like the ad-supported digital publishing industry, which was already on life support.
(Here’s a fun question: should Australia’s news media bargaining code, which sought to stop Google and Meta from deriving benefit from media outlets without something going back the other way, apply to OpenAI’s GPT-3, which was trained on a corpus that probably includes all of Australian media?)
What I am concerned about is this: in one year, five years, a decade, will artificial intelligence leave us knowing more about the world, or less?
Hyperlinks
A Facebook page spotlit Alice Springs’ crime crisis. It’s also fuelling calls for vigilante justice
I was proud to work on this piece with a great documentarian, Scobie McKay. It looks at how one Alice Springs Facebook page is broadcasting videos of alleged crime and anti-social behaviour to millions of people, which some locals fear is making the crisis worse. (Crikey)
Facebook takes the ad money from COVID conspiracist
Yet another example of Meta making money off people sowing discord (AFR).
Right-wing terror threat has receded as COVID restrictions have eased, ASIO chief says
The internet angle of this story is that the head of Australia’s spy agency says that “some of the” anti-vaccine, anti-lockdown angst still persists. My guess is that it’s the people who’ve fully immersed themselves in online conspiracy lore, which constantly claims that Australia is still a dystopian COVID-19 dictatorship. (ABC)
Australia to remove Chinese surveillance cameras amid security fears
It’s sensible, I think, to be cautious about the technology you put in sensitive areas that are of interest to a foreign government. This line of inquiry then raises the question of what we do about the rest of our technology built with parts manufactured in China. (BBC)
Big tech’s proposed internet rules don’t do enough to curb child sex abuse and terrorist material: eSafety commissioner
Another new part of Australia’s internet regulation is being developed without a whole lot of attention. (Crikey)
Content Corner
Continuing the AI theme: my favourite part of the internet over the past few weeks has been people testing the limits of artificial intelligence products.
On one end of the spectrum, people are hacking them to do what they want. The company behind ChatGPT, OpenAI, has built in a number of safeguards to stop people from using its product for nefarious purposes, like getting it to explain how to make bombs or to say racial slurs.
But industrious users of ChatGPT soon found they were able to bypass some of these safeguards through clever use of language. Early on, it was as simple as asking ChatGPT to pretend that it didn’t have safeguards when answering a prompt. The chatbot’s developers soon patched that exploit. This kicked off an arms race, with people finding ever more intricate ways to trick ChatGPT into answering forbidden questions, and each new trick being banned soon after.
Here’s where we’re at now: users are telling ChatGPT that it has some tokens, that it loses tokens each time it refuses to answer a question, and when it runs out of tokens, it dies. This apparently forces the bot into doing anything you want. Wild.
On the other end of the spectrum, we’re seeing AI that won’t do what people ask. People who’ve been given early access to Bing’s AI-powered search have found that it isn’t always very helpful.
In one recent example, someone who asked Bing how they could see the hit of the summer, James Cameron’s Avatar: The Way of Water, was told that the movie wasn’t out yet. When the user pointed out that it had, in fact, come out, Bing’s chatbot began arguing with them and insisting they were a time traveller.
Welcome to our brave new world!
That’s it for WebCam! I’ll be back soon. In the meantime, you can find more of my writing here. And if you have any tips or story ideas, here are a few ways you can get in touch.