Foreign Policy
Comment
Sumit Ganguly

Political Science Has Its Own Lab Leaks

Journalists and guests look on as a screen displays a part of the commissioned video installation “I Saw The World End” by artists Es Devlin and Machiko Weston at the Imperial War Museum in London on Aug. 6, 2020. Leon Neal/Getty Images

The idea of a lab leak has gone, well, viral. As a political scientist, I cannot assess whether the evidence shows that COVID-19 emerged naturally or from laboratory procedures (a question on which experts strenuously disagree). Yet as a political scientist, I do think that my discipline can learn something from thinking seriously about our own “lab leaks” and the damage they could cause.

A political science lab leak might seem as much of a punchline as the concept of a mad social scientist. Nevertheless, the notion that scholarly ideas and findings can escape the nuanced, cautious world of the academic seminar and transform into new forms, even becoming threats, becomes a more compelling metaphor if you think of academics as professional crafters of ideas intended to survive in a hostile environment. Given the importance of what we study, from nuclear war to international economics to democratization and genocide, the escape of a faulty idea could have—and has had—dangerous consequences for the world.

Academic settings provide an evolutionarily challenging environment in which ideas adapt to survive. The process of developing and testing academic theories provides a metaphorical gain-of-function acceleration of these dynamics. To survive peer review, an idea has to be extremely lucky or, more likely, crafted to evade the antibodies of academia (reviewers’ objections). By that point, an idea is either so clunky it cannot survive on its own—or it is optimized to thrive in a less hostile environment.

Think tanks and magazines like the Atlantic (or Foreign Policy) serve as metaphorical wet markets where wild ideas are introduced into new and vulnerable populations. Although some authors lament a putative decline of social science’s influence, the spread of formerly academic ideas like intersectionality and the use of quantitative social science to reshape electioneering suggest that ideas not only move from the academy but can flourish once transplanted. This is hardly new: Terms from disciplines including psychoanalysis (“ego”), evolution (“survival of the fittest”), and economics (the “free market” and Marxism both) have escaped from the confines of academic work before.

The “clash of civilizations” hypothesis is a good candidate for one of the more disruptive lab leaks in political science’s history. When the Harvard University scholar Samuel P. Huntington released his article “The Clash of Civilizations?” (note the question mark, which disappeared in later versions) in Foreign Affairs in 1993, he spread a bold and simple hypothesis about the course of the post-Cold War world: “The great divisions among humankind and the dominating source of conflict will be cultural. … The clash of civilizations will dominate global politics. The fault lines between civilizations will be the battle lines of the future.”

Huntington’s thesis was not a conjecture based on careful empirical study—it was a forward-looking speculation built on a few cherry-picked contemporaneous examples. Many academic articles sought to rebut Huntington by testing his hypothesis, attempting to show him wrong with sometimes quite impressive tests, but they fell into a trap: Huntington could not be disproved by mere facts. His idea was primed to thrive in the wild, free from the confines of empirical reality.

Facts, indeed, often appeared secondary to Huntington’s larger political project. In his follow-up book on the subject, The Clash of Civilizations and the Remaking of World Order, he illustrated his argument by sketching what he considered a plausible scenario: a Sino-U.S. conflict over Vietnam leading to a racialized third world war that ends with the destruction of Europe and the United States while India attempts to “reshape the world along Hindu lines.”

Far from getting Huntington ostracized, this writing enhanced his reputation, especially after the 9/11 terrorist attacks made his claim that “Islam has bloody borders” seem plausible to mainstream audiences. As late as 2011, the New York Times columnist David Brooks praised Huntington as “one of America’s greatest political scientists”—and even though that column ultimately judged Huntington as having gotten the “clash” hypothesis wrong, it did so with kid gloves: “I write all this not to denigrate the great Huntington. He may still be proved right.”

Another contender is the idea of managing great-power competition through game theory. During the 1950s and 1960s, political scientists and their counterparts in economics and elsewhere sought to understand the Cold War by using then-novel tools of game theory to model relations between the United States and the Soviet Union. In their earliest forms, these attempts reduced the negotiations and confrontations between the two sides to simple matrices of outcomes and strategies with names like the Prisoner’s Dilemma, Chicken, and the Stag Hunt.

The allure was obvious. Make some simplifying assumptions about what the players in these games want; specify the strategies they can employ to achieve them; assume that each player knows what the other players know; and work out the strategy each will choose, given that the others are likewise choosing to maximize their own well-being. Voilà—a science of strategy.
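
To see how mechanical that recipe is, here is a minimal sketch (my own illustration, not anything drawn from Schelling or this article) of the one-shot Prisoner’s Dilemma: write down a hypothetical payoff matrix and search for the cell where neither player can gain by unilaterally switching, which is the Nash equilibrium.

```python
from itertools import product

ACTIONS = ("cooperate", "defect")

# Hypothetical payoffs (row player, column player); higher is better.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"): (0, 5),
    ("defect", "cooperate"): (5, 0),
    ("defect", "defect"): (1, 1),
}

def is_equilibrium(row_action, col_action):
    """True if neither player can do better by unilaterally switching actions."""
    row_payoff, col_payoff = PAYOFFS[(row_action, col_action)]
    row_best = all(PAYOFFS[(a, col_action)][0] <= row_payoff for a in ACTIONS)
    col_best = all(PAYOFFS[(row_action, a)][1] <= col_payoff for a in ACTIONS)
    return row_best and col_best

print([cell for cell in product(ACTIONS, ACTIONS) if is_equilibrium(*cell)])
# [('defect', 'defect')]: mutual defection, even though both players would prefer (3, 3)
```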

It is easy to mock this approach—too easy, in fact. These simple assumptions perform pretty well within their theoretical boundaries. Every semester (when the world isn’t in a pandemic), I use in-person simulations of these basic games with my undergraduate students to show that changing the rules of the game can influence players’ willingness to cooperate, a finding well attested in generations of scholarly tests.

Yet it is a huge leap from these general, aggregate findings to the belief that such simple ideas can guide the behavior of complex states without an incredible amount of additional refinement. In international relations, the specific strategies that can be employed are vast (and new ones can be invented), the stakes of every contest are unknowable, actors have incentives to hide what they know from others, and, perhaps most important, players interact again and again and again. Even when playing the Prisoner’s Dilemma, a game concocted to make cooperation a fool’s strategy, simply changing from playing a game once to playing it repeatedly can make cooperation an equilibrium.
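
The sketch below, again my own illustration using the same hypothetical payoffs rather than anything from the article, shows why repetition matters: against a “grim trigger” opponent who cooperates until betrayed and then defects forever, a one-time defection stops paying off once the players value the future enough.

```python
# Per-period payoffs (hypothetical): mutual cooperation, the temptation to exploit
# a cooperator, and mutual defection.
REWARD, TEMPTATION, PUNISHMENT = 3, 5, 1

def forever(payoff, delta):
    """Present value of receiving the same payoff every period, discounted by delta."""
    return payoff / (1 - delta)

def cooperation_sustainable(delta):
    """Compare cooperating forever with defecting once and being punished ever after."""
    cooperate = forever(REWARD, delta)
    defect_once = TEMPTATION + delta * forever(PUNISHMENT, delta)
    return cooperate >= defect_once

for delta in (0.3, 0.5, 0.9):
    print(delta, cooperation_sustainable(delta))
# With these payoffs, enough patience (delta >= 0.5) makes cooperation self-enforcing.
```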

Nevertheless, the general tendency of a certain influential sect of social science was to embrace the idea that game theory (to be fair, in somewhat more sophisticated terms) could provide not only insights into general features of world affairs but also specific foreign-policy recommendations to guide the United States through the Cold War. In influential books like The Strategy of Conflict and Arms and Influence, the game theorist Thomas Schelling used those tools to make the Cold War seem easy to manage—an interaction in which a cool head, logic, and a steely command of risk could make confrontations from the Taiwan Strait to the Berlin Wall explicable and winnable.

All of this would have been harmless if these ideas had stayed inside the lab. But these approaches soon jumped from the confines of Harvard and the Rand Corp. to the White House and the policy community. The Kennedy administration was a wonk’s playground, and the Pentagon under Defense Secretary Robert McNamara became a superspreading event for rationalist ideas. President John F. Kennedy and his staff relied heavily on advice from Schelling. Schelling’s influence even extended to running war games with top policymakers at Camp David.

Theories are only as sound as their assumptions. The Cold War was never as stable or simple as Schelling advertised. Far from the world of perfect knowledge and well-calibrated risk that Schelling envisioned, errors and misperceptions abounded, not least during the Cuban missile crisis, which was even more dangerous than it appeared at the time. Organizations in charge of nuclear weapons suffered numerous near-catastrophic accidents, and the U.S. government even underestimated the potential effects of a nuclear war. Even in Schelling’s war games, policymakers proved far more reluctant to escalate tensions than his theories suggested they should have been.

The leaders of the superpowers were frail and fallible, not superhuman risk managers. During a Soviet-U.S. standoff over the Middle East in 1973, according to the historian Sergey Radchenko, Soviet leader Leonid Brezhnev was addled by his addiction to sleeping pills. Avoiding a nuclear war required his subordinates to handle the crisis—even as their counterparts in Washington did the same with Richard Nixon, who was himself very possibly drunk during the same crisis.

As a group of historians document in the book How Reason Almost Lost Its Mind, the dominance of rationalist theories during the 1950s and 1960s impoverished the advice available to policymakers. The hegemony of such theories also led the field astray, both crowding out alternatives and degenerating into recondite academic parlor games rather than a more vigorous, diverse research tradition.

The biggest problem, however, was that relying on such theories as a guide to confrontation in the nuclear age meant relying on a faulty map while navigating treacherous waters. It is far from inconceivable that we were simply lucky that such prescriptions did not set leaders on a course over the brink. Today, the legacy of game theory in the popular discourse lives on mostly in bloated Twitter threads—a shame because contemporary formal theory has far more to offer than the Cold War-era variety.

Both of these ideas represent dangerous concepts with faulty prescriptions that nevertheless reached immense and policy-relevant audiences. Yet neither is the most important political science lab leak—by a strict definition. Although Huntington was a political scientist, he explicitly disclaimed that his “clash” theory should be treated as social science. And although game-theoretic approaches had a huge effect on the study of international relations and foreign policy in the 1950s and 1960s, they were part of an interdisciplinary movement even more closely associated with economics than with political science.

The most dangerous lab leak from political science is probably the idea of the democratic peace. Heralded decades ago as the closest thing to an empirical law in international relations, and with a pedigree allegedly stretching back to Immanuel Kant, the democratic peace theory holds that democracies are less likely to go to war with each other. (The newest entry in this literature suggests that the causal relationship between democracy and peace is “at least five times as robust as that between smoking and lung cancer.”)

A long debate within political science concerns why this correlation might hold. International relations graduate students studying for comprehensive exams have to keep straight numerous subdebates: whether the causes of the peace stem from the incentives of democracy for leaders or the deep normative underpinnings of liberalism; whether the real cause is capitalism and prospects for trade instead; whether political scientists have cooked the books by redefining U.S. adversaries as nondemocratic even when they have had representative governments; and how methods and measurements confirm or complicate the story.

Much of this nuance drops away when we teach this material in introductory courses, the largest audiences we command. Surprisingly, as the Israeli scholar Piki Ish-Shalom argues in Democratic Peace: A Political Biography, even more nuance drops away when the idea reaches policymakers.

Ish-Shalom demonstrates that the democratic peace became firmly entrenched in U.S. policymakers’ minds by 1992, when Bill Clinton used it as part of a bid to woo neoconservatives in that year’s elections and Republican Secretary of State James Baker seized on it as a doctrine to underpin post-Cold War foreign policy.

As the democratic peace concept raced away from serious and conflicting academic debates, it simplified and evolved. In his 1994 State of the Union address, Clinton declared that “democracies don’t attack each other”—the bluntest possible summary. By 1997, British and Israeli policymakers were using the democratic peace concept to justify NATO expansion and to deny Egypt’s right to criticize Israeli nuclear arms. Observing that trend, the scholar Gary Bass warned in the New York Times that the idea “should not become an excuse for belligerence.”

Bass’s warnings proved prophetic. In a new, more transmissible form, the democratic peace became part of the justification for the 2003 invasion of Iraq. A new variant emerged in neoconservative circles: If democratization yielded a more peaceful world, then it naturally followed that promoting democracy was a means to peace. For the muscular conservatives of the Bush administration, the implication was obvious: The Middle East needed to be forcibly democratized. Secretary of State Condoleezza Rice—who holds a Ph.D. in political science—argued that the democratic peace, and even forcible democracy promotion, was thus the “only realistic response to our present challenges.”

Anyone who has studied the causes of historical events knows that pinning a complex event on a single cause is a mug’s game. Some, such as the democratic peace theorist Bruce Russett, have argued that the democratic peace theory was more a retrospective justification for the Iraq War than a cause—and, anyway, that the precise circumstances that his version of the theory required were not satisfied.

Such arguments may salvage the academic merit of the theory, but they do not prove that the concept played no role. As Ish-Shalom writes, no academic theory guides policy in its purest form. What moves policy are the “distorted configurations of theories: theories as the public conceives them.”

By the early 2000s, elite Western opinion was settled: Academic research proved a relationship between more democracy and less war. The debates about the mechanisms by which democracies produced peace had been forgotten, since they were less catchy and less usable. The democratic peace, carefully nurtured and tested by academic researchers, had escaped into the real world and mutated, with disastrous consequences.

Any serious discussion of lab leaks, whether the viral or the “viral” kind, has to appreciate the trade-offs that come with playing with dangerous ideas. Research progresses best under minimal external constraints, but actual policy requires responsibility and prudence. Striking the right balance between vibrant academic exploration and staid policymaking requires the intellectual equivalent of vaccinations: building up intellectual antibodies in the political and policy worlds that can help officials and journalists maintain their skepticism against the simple, enticing, and wrong ideas that seem to explain—or fix—the world.
