History suggests that societies generally overestimate the short-term implications of new technologies while underestimating their longer-term ones. Current experience with artificial intelligence – the technology enabled by machine learning – suggests we are getting it the other way round this time. There’s too much talk about the putative “existential risk” to humanity posed by AI, and too little about our experience of the technology so far and about corporate plans for exploiting it.
Although AI has been hiding in plain sight for a decade, it took most people by surprise. The appearance of ChatGPT last November signalled that the world had stumbled upon a powerful new technology. Not for nothing are the models underpinning this new “generative AI” called “foundation” models: they provide the base on which the next wave of digital innovation will be built.
It is also transformational in innumerable ways: it undermines centuries-old conceptions of intellectual property, for example, and it has the potential radically to increase productivity, reshape industries and change the nature of some kinds of work. On top of that, it raises troubling questions about the uniqueness of humans and their capabilities.
The continuing dispute between the Hollywood studios and the screenwriters’ and actors’ unions perfectly exemplifies the extent of the challenges posed by AI. Both groups are up in arms about the way online streaming has reduced their earnings. But the writers also fear that their role will be reduced to rewriting AI-generated scripts; and the actors are concerned that the detailed digital scanning enabled by new movie contracts will allow studios to create persuasive deepfakes of them, which the studios would then own and be able to use “for the rest of eternity, in any project they want, with no consent and no compensation”.
So this technology isn’t just a better mousetrap: it’s more like steam or electricity – a general-purpose technology. Given that, the key question for democracies is: how can we ensure AI is used for human flourishing rather than merely for corporate gain? On this question, the news from history is not good. A seminal recent study of 1,000 years of technological progress by two eminent economists, Daron Acemoglu and Simon Johnson, shows that although some benefits have usually trickled down to the masses, the rewards have – with one exception – invariably gone to those who own and control the technology.
The “exception” was a period in which democracies fostered countervailing powers – civil-society organisations, free media, activists, trade unions and other progressive, technically informed institutions that supplied a steady flow of ideas about how technology could be repurposed for social good rather than exclusively for private profit. This is the lesson from history that societies confronted by the AI challenge need to relearn.
There are some signs that the penny may finally have dropped. The EU, for example, has an ambitious and far-reaching AI Act making its way through the union’s legislative processes. In the US, the National Institute of Standards and Technology has published an impressive framework for managing the risks of the technology. And the Biden administration recently issued a “Blueprint for an AI Bill of Rights”, which looks good on paper but is essentially just a list of aspirations that some of the big tech companies claim to share.
It’s a start – provided governments don’t forget that leaving the implementation of powerful new technologies solely to corporations is always a bad idea.