Nexus: A Brief History of Information Networks from the Stone Age to AI, by Yuval Noah Harari, Random House, 528 pages, $35
Early in Sapiens: A Brief History of Humankind, the book that made him a globally renowned intellectual, the Israeli historian Yuval Noah Harari stresses that human beings are storytellers. We use fictions—religions, nations, laws, currencies—to bind ourselves together and cooperate. The stories are largely made up, but they grip our minds. Often they help us; sometimes they lead us astray.
That's the spirit in which to receive Harari's latest volume, Nexus: A Brief History of Information Networks from the Stone Age to AI. Harari tells an entertaining and at times illuminating story. But the story doesn't withstand much scrutiny.
Nexus is about how information technologies shape societies. Because they communicated by speaking face-to-face, Harari tells us, hunter-gatherers had to live in small, flat bands. The advent of documents—tablets for tallying grain harvests and such—fueled the rise of centralized governments. The printing press and the radio were necessary for both large democracies and totalitarian regimes. Soon artificial intelligence could badly disrupt current political structures. Harari hangs a lot of bunting on this stuff, but you've basically just speed-read the book.
Harari does have a knack for viewing things from interesting angles. A modern society is held together, he proposes, by a combination of myth and bureaucracy. These forces must supply a functioning balance of truth and order. That balance is set according to how the society collects, organizes, distributes, and processes information. Democracies let information flow freely, an approach that's good for truth but dicey for order. Dictatorships constrain information, which tends to create order but ultimately crushes truth.
This narrative has its moments. But in his pursuit of a charming tale, Harari becomes an unreliable narrator. Take his portrait of the Scientific Revolution. Science relies on open inquiry, intense debate, and an insistence on settling disputes with empirical evidence. Harari wants to downplay the value of free speech, underline the need for certain expert bodies, and borrow the prestige of science for a wider swath of authority figures. For him, therefore, science's defining feature is the presence of "curation institutions" that "reward skepticism and innovation rather than conformity." This is misleading. A faculty of moral philosophers fits Harari's criteria, but, unlike a biology department, it makes no objective progress, achieves nothing concrete, and often simply evolves to keep pace with elite sensibilities.
Or consider how Harari contorts the fall of the Roman Republic: He is so focused on inadequate information networks that he fails to mention more conventional factors, such as thwarted land reform or political brinkmanship. Worse, he misses a weakness in his argument. The republic collapsed in part because its information networks were strong. Julius Caesar spent nine years fighting in Gaul. Not least because he was a popularis, he needed to keep himself fresh in people's minds back in Rome. Hence his famous Commentaries, which were churned out quickly, raced home, and likely recited to large audiences. Caesar's ability to disseminate information from abroad eased his rise to dictatorship.
At times Harari softens his point, claiming merely that the republic's information infrastructure couldn't have supported an empire-wide mass democracy. Maybe so—but you can't be very sure reading Nexus. In his haste to cram complex events into crisp little episodes, Harari passes over inconvenient details.
If Harari is this slapdash when he discusses the past, how can we trust him when he turns to the even harder task of predicting the future? We can't. Harari believes that artificial intelligence could soon overpower us, becoming the de facto ruler of our politics, culture, and decision making. To bolster this alarmist vision, Harari stacks the deck in favor of AI and against human agency. AI is robust: It could create "mythologies…far more complex and alien than any human-made god." Humanity is frail: Algorithms could "exploit with superhuman efficiency the weaknesses, biases, and addictions of the human mind." We will be putty in the machines' hands (people could "come to use a single computer advisor as a one-stop oracle"). Harari is committed to this story, but he's just spitballing. His favorite words are "may" and "might."
Harari's dim view of human capacity shows up when he raises the QAnon conspiracy theory. QAnon's spread has had "far-reaching consequences," Harari writes. But that doesn't mean QAnon is convincing to most people. It's just the conspiracy theory du jour among a subset of the Americans predisposed to believe in wild conspiracy theories. Nonetheless, Harari leaps to the conclusion that, because some people find QAnon compelling, almost all people will soon find AI-generated ideologies compelling, as AI's celestial powers of persuasion overawe us. "Computers…won't need to send killer robots to shoot us. They could manipulate human beings to pull the trigger."
So AI might distort our minds in some dystopian fashion. Or it might sharpen our thoughts on what we'd believe anyway, or help us dream up fresh ideas that remain largely our own. You don't know. I don't know. Harari definitely doesn't know. (That said, do you feel like you're easily influenced? Probably not. Much evidence suggests that, by and large, people's beliefs are not as malleable as intellectuals like Harari suppose.)
Whatever the topic—AI, privacy, surveillance, biometric data, social media algorithms, social credit systems—Harari's approach is to highlight bad news and then extrapolate. He tends to assume that trends continue indefinitely, that checks and balances never emerge, that countermeasures are never deployed. He discusses with trembling credulity Nick Bostrom's notorious paperclip-alypse, in which an AI, blindly pursuing its prime directive to create as many of the small metal fasteners as possible, exterminates the human race. He ignores the many criticisms of that scenario, such as the fact that no one designs products to have a single goal and then releases them untested. (Self-driving cars don't shoot off in a straight line at maximum speed.) He convinces himself that technology is likely to destroy us and that our salvation lies in listening to wise men like him and imposing government regulations.
Harari is especially bad on the subject of free expression. He worries about problems "created by information" and "made worse by more information." He cautions that "Free conversation must not slip into anarchy." He laments the ease with which average people can now circumvent "gatekeeper" institutions, such as the legacy media, and "join the debate." "Manipulative bots" will "build friendships" with these rubes, he fears, and "influence" their fragile psyches. As AI advances, he warns, liberal democracies might lose the ability to "combine free debates with institutional trust."
Yet for all this scornful rhetoric, Harari's proposals for reform in this area, as in others, are strangely muted. He urges social media platforms to do more to arbitrate truth (never mind how poorly such efforts have gone in the past), and he recommends banning bots that pretend to be human. If democracy is drowning in information, Harari is not about to save us.
Perhaps Harari can't imagine plausible solutions to his fantastical scenarios. Perhaps he can't stomach setting forth bold but highly illiberal schemes. Or perhaps he understands, deep down, that the world needn't take drastic action in response to hundreds of pages of half-baked guesswork. "I've just told a story," Harari says at one point in Nexus. "These are all wild speculations," he says at another. Here and there, glimmers of self-awareness.
The post The Fantastical Scenarios of Yuval Noah Harari: From the Roman Past to the AI Future appeared first on Reason.com.