The Guardian - UK
Comment
John Naughton

Can China keep generative AI under its control? Well, it contained the internet

Does AI have to bring openness? Photograph: Taidgh Barron/Zuma/Shutterstock

It is often said that insanity is doing the same thing over and over and expecting different results. Something similar applies to western thinking about the People’s Republic of China. When that country’s rulers embarked on their astonishing programme of industrialisation, we said that if they wanted capitalism (and they clearly did) then they would have to have democracy. Their response: we’ll have the capitalism but we’ll give the democracy stuff a miss.

Then, in the 1990s, when they decided that they wanted the internet, Bill Clinton and co opined that if they wanted the net then they would also have to have openness (and, therefore, ultimately, democracy). As before, they went for the internet but passed on the openness bit. And then they went on to build the only technological sector that rivals that of the US and could, conceivably, surpass it in due course.

The resulting hegemonic anxiety has been exceedingly useful for US corporations in their efforts to ward off government regulation of the tech industry. The lobbying message is: “If you cripple us with onerous regulation then China will be the biggest beneficiary, at least in the technologies of the future” – which, in this context, is code for generative AI such as ChatGPT, Midjourney, Dall-E and the like.

Something happened last week that suggests we are in for another outbreak of hubristic western cant about the supposed naivety of Chinese rulers. On 11 April, the Cyberspace Administration of China (CAC), the country’s internet regulator, proposed new rules for governing generative AI in mainland China. The consultation period for comments on the proposals ends on 10 May.

Although previous regulations by this powerful body have addressed tech products and services that threaten national security, these new rules go significantly further. A commentary by Princeton’s Center for Information Technology Policy, for example, points out that the CAC “mandates that models must be ‘accurate and true’, adhere to a particular worldview, and avoid discriminating by race, faith, and gender. The document also introduces specific constraints about the way these models are built.” To which the Princeton experts add a laconic afterthought: addressing these requirements “involves tackling open problems in AI like hallucination, alignment, and bias, for which robust solutions do not currently exist”.

Note that reference to the nonexistence of “robust solutions”. It may be accurate in a western liberal-democratic context. But that doesn’t mean it applies in China. And the distinction goes to the heart of why our smug underestimation of China’s capabilities has consistently been so wide of the mark. We thought you couldn’t have capitalism without democracy. China showed you can – as indeed liberal democracies may be about to discover for themselves unless they find ways of reining in corporate power. We thought the intrinsic uncontrollability of the internet would inevitably have a democratising effect on China. Instead, the Chinese regime has demonstrated it can be controlled (and indeed exploited for state purposes) if you throw enough resources at it.

Which brings us to the present moment, when we are reeling at the apparently uncontrollable disruptive capabilities of generative AI, and we look at some of the proposals in the CAC’s paper. Here’s article 4, section 2: “Generative AI providers must take active measures to prevent discrimination by race, ethnicity, faith, gender, and other categories.” To which the west might say: Yeah, well, we’re working on that but it’s difficult. Or section 4 of the same article: “Content generated by AI should be accurate and true, and measures must be taken to prevent the generation of false information.” Quite: we’re working on it but haven’t cracked it. And section 5: “Generative AI should not harm people’s mental health, infringe on intellectual property, or infringe on the right to publicity [ie someone’s likeness].” Hmmm… Getty Images has a big lawsuit in progress in the US on the IP question. But it’ll take (quite) a while to get that sorted.

I could go on, but you get the point. Things that are difficult to accomplish in democracies are easier to get done in autocracies. It’s conceivable that, with this newish technology, the Chinese regime has come up against something that even it cannot control. Or, as Jordan Schneider and Nicholas Welch put it recently, that it finds itself caught between a rock and a very hard place: “China’s aspirations to become a world-leading AI superpower are fast approaching a head-on collision with none other than its own censorship regime. The Chinese Communist party prioritises controlling the information space over innovation and creativity, human or otherwise. That may dramatically hinder the development and rollout of large language models, leaving China to find itself a pace behind the west in the AI race.”

They might be right. But, given our past complacency, I wouldn’t bet on it.

What I’ve been reading

Early warning
“The approaching tsunami of addictive AI-created content will overwhelm us” is a perceptive essay on the Social Warming Substack by Charles Arthur on what lies ahead.

Deep Blue II
Francisco Toro reflects on an earlier moment of existential angst in “Our new Deep Blue moment” – find it on his Persuasion Substack.

Little faith
John Horgan’s controversial diatribe against self-congratulatory sceptics is an enjoyable rant by a famous science writer on his blog.
