A large group of technology leaders and geeks, including Bill Gates and Sam Altman, the CEO of ChatGPT’s owner OpenAI, uttered the word “extinction” last week in describing the dangers of artificial intelligence.
Extinction? Really? Good God! I knew there were some issues, but Terminator 3 and that terrifying robot woman in maroon leather? This requires attention.
The open letter with 350 signatures was brief: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Some have suggested it’s a cynical attempt by the AI industry to distract us from what they’re getting up to now by making us worry instead about a distant future extreme.
For example, Emily Bender from the University of Washington, tweeted: “When the AI bros scream ‘Look, a monster!’ to distract everyone from their practices (data theft, profligate energy usage, scaling of biases, pollution of the information ecosystem), we should make like Scooby-Doo and remove their mask.”
Others who take the threat more seriously say they’re not talking about a self-aware Skynet removing humans from the planet with a nuclear holocaust, but rather the fact that while the internet democratised knowledge, AI could do the same with scientific expertise.
That is, the ability to create a lethal pathogen or weapon of mass destruction might become available to a suicidal maniac who thinks it would be splendid to take everyone on the planet with him, instead of just his wife and kids, or other kids down at the local school.
But cynical or real, it would be a mistake to let the perfectly bad (possible extinction) drive out the bad, to adapt the aphorism about letting the perfect drive out the good.
China takes the lead
One of the more interesting examples of regulating artificial intelligence in the world is from China, which also leads the United States in developing it.
A good, comprehensive draft code was published in April and, apart from the usual stuff about reflecting Socialist Core Values, not subverting state power, not overturning the socialist system and not inciting separatism, it requires anyone planning to use AI to submit a security assessment to the state cyberspace and information department before proceeding.
I’m not sure that getting pre-approval for every AI application would work in a democracy, but it’s a reminder that AI is a risky business almost entirely developed and operated by private businesses that are competing with each other to make tons of money.
There’s another risky business in which private companies compete to make tons of money – banking.
Every country in the world has a big bureaucracy designed to regulate and watch over the banks – in Australia it’s the $226.5 million-a-year APRA. And after the GFC, when the banks most recently demonstrated how risky they are, bank regulations and the budgets of regulators, like APRA, were significantly increased.
In fact the entire history of bank regulation has been a matter of mopping up after disasters, starting with the tulip bubble in 17th-century Holland. The US Federal Reserve system was created after the Knickerbocker Trust crisis of 1907. Australia followed four years later.
In a way, last week’s succinct warning from the 350 geeks was just a suggestion that we don’t wait for a disaster this time and get our regulating act together now.
Nicholas Davis, industry professor of emerging technology and co-director at the Human Technology Institute at the University of Technology Sydney, says most of the laws are already in place and just need to be updated and enforced.
For example, he says, Australia’s privacy laws are 20 years out of date, and their small business exemptions make no sense.
Voluntary or enforced?
One day after the “extinction” warning last week, the Minister for Industry and Science, Ed Husic, released a discussion paper called ‘Safe and responsible AI in Australia’, calling for submissions by July 26.
The paper is mostly waffle, of course, but the main issue to be resolved is whether Australia sticks with its current approach of voluntary guidelines, like the US and UK, or follows Europe and China with binding regulation.
Here’s my submission: Regulation that is voluntary, or even weak, might as well not exist.
Before Australia’s first central bank, the Commonwealth Bank, was created in 1911, banking regulation in Australia was voluntary, which contributed to the crisis of 1893, when many banks collapsed, a lot of money evaporated and an economic depression followed.
Banking regulation then evolved in lurches that coincided with subsequent crises.
There’s also a lot of discussion at the moment about whether Australia is getting left behind in the race to develop AI, not just regulate it.
Small-change investment
In his press release accompanying the federal budget in May, Mr Husic spruiked that the “government has committed $41.2 million to support the responsible deployment of AI in the national economy”.
Compared with the US government’s $5 billion, they might as well put that money back in the coin jar.
I asked Google’s AI machine, Bard, for the total global spending on AI in 2022. Answer: $117.6 billion. (ChatGPT apologised and said it didn’t know). Then I asked Bard for Australia’s total spend in 2022, and the answer was $2 billion.
Australia’s share of world GDP is 1.6 per cent. That $2 billion is 1.7 per cent of $117.6 billion, so leaving aside the government’s puny effort we seem to be doing fine.
But to be honest, I’m not sure we need to spend anything on AI at all, except on its use and regulation.
All technology is global now; I use Bard and ChatGPT, and we will all be using AI soon, whether we know it or not – we already are, in fact. We don’t really need an Australian Bard (Banjo Paterson?): it’s unlikely to be cheaper, and the American ones do Aussie just fine.
But I do think we should regulate it like banking, and get ahead of the game instead of waiting for a crisis. And we’ll probably need a bureaucracy the size of APRA to do it.
Alan Kohler is founder of Eureka Report and finance presenter on ABC News. He writes twice a week for The New Daily