[This month, I'm serializing my 2003 Harvard Law Review article, The Mechanisms of the Slippery Slope.]
"[T]he assault weapons ban is a symbolic—purely symbolic—move in [the] direction [of disarming the citizenry]," wrote columnist Charles Krauthammer, a proponent of a total gun ban. "Its only real justification is not to reduce crime but to desensitize the public to the regulation of weapons in preparation for their ultimate confiscation …. De-escalation begins with a change in mentality …. The real steps, like the banning of handguns, will never occur unless this one is taken first …."
This is a claim about slippery slopes, though made by someone who would welcome the slippage. Decision A (an assault weapon ban) will eventually lead to B (total confiscation of weapons) because A and similar decisions will slowly change the public's mind about gun ownership—"desensitize" people in preparation for a future step. (Note how this mechanism differs from the multi-peaked preferences slippery slope, which does not rely on people's underlying attitudes' being shifted.)
But how does this metaphorical "desensitization" actually work? Why don't people simply accept decisions A, B, C, and so on until they reach the level they've wanted all along, and then say "Stop"? Why would voters let government decisions "change [their] mentality" this way?
Let me start with slippery slopes where a legislative or judicial decision increases the likelihood of a future legislative (not judicial) decision. Here, I think the slippery slope may be driven by what I call the Is-Ought Heuristic, and by what others have called "the normative power of the actual."
In the wake of the September 11 attacks, Congress was considering the USA Patriot Act, which, among other things, may let the government track—without a warrant or probable cause—which e-mail addresses someone corresponded with, which Web hosts he visited, and which particular pages he visited on those hosts. Let's call this "Internet tracking," and let's assume for now that this power is undesirable. This is our result B. Twenty-two years earlier, in Smith v. Maryland, the Supreme Court approved similar tracking of the telephone numbers that a person had dialed (the so-called "pen register"). This was decision A.
Curiously, most arguments on both sides of the Internet tracking debate assumed A was correct, even though a precedent holding that similar legislation was not unconstitutional might at first seem of little relevance to whether the new legislation was proper. The new proposals, one side argued, are just cyberspace analogs of pen registers and are therefore good. No, the other side said, some aspects of the proposals (for instance, the tracking of the particular Web pages that a person visited) are unlike pen registers—they are analogous not just to tracking whom the person was talking to, but to tracking what subjects he was discussing. Few people argued that Smith was itself wrong and that the bad precedent shouldn't be extended. The "normative power of the actual" was operating here—people accepted that pen registers were proper because they were legal.
Why did people take the propriety of pen registers for granted? Why didn't people ask themselves what they, not courts, thought of such devices, both for phone calls and for Internet access? Why didn't they consider the propriety of B directly, rather than being swayed by decision A, the legal system's possibly incorrect acceptance of pen registers?
Perhaps these people fell into the is-ought fallacy; they erroneously assumed that just because the law allows some government action (pen registers), actions of that sort must be proper. If this error is common, then one might generally worry that the government's implementing decision A will indeed lead people to fallaciously assume that A is right, which will then make it easier to implement B.
This worry doesn't by itself justify disapproving of A, since people's acceptance of the propriety of A will trouble you only if you already think A is wrong. But it might substantially intensify your opposition to A; even if you think A is only slightly wrong on its own, you might worry that its acceptance by the public could foster many worse B's.
But there may be more involved here than just people's tendency to succumb to fallacies. Sometimes, people may reasonably consider a law's existence (is) to be evidence that the law's underlying assumptions are right (ought).
Consider another example: you ask someone whether peyote is dangerous. It would be rational for the person's answer to turn partly on his knowledge that peyote is illegal. "I'm not an expert on drugs," the person might reason, "and it's rational for me not to develop this expertise; I have too many other things occupying my time. But Congress probably consulted many experts and concluded that peyote should be banned, presumably because it thought peyote was dangerous."
"I don't trust Congress to always be right, but I think it's right most of the time. Thus, I can assume that it was probably right here, and that peyote is indeed dangerous." Given the person's rational ignorance, it makes sense for him to let the state of the law influence his factual judgment about the world.
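The deference reasoning above is, in effect, informal Bayesian updating: a voter with no prior opinion who believes Congress is usually right ends up adopting Congress's implied judgment. As a purely illustrative sketch (the numbers and the symmetric-error assumption are mine, not the article's):

```python
# Toy Bayesian model of the is-ought heuristic described above.
# All figures are hypothetical illustrations, not claims from the article.

def posterior_dangerous(prior, congress_accuracy):
    """P(drug is dangerous | Congress banned it), assuming Congress bans
    dangerous drugs when it is right and errs symmetrically otherwise."""
    p_ban_given_dangerous = congress_accuracy       # Congress correctly bans
    p_ban_given_safe = 1 - congress_accuracy        # Congress mistakenly bans
    numerator = p_ban_given_dangerous * prior
    denominator = numerator + p_ban_given_safe * (1 - prior)
    return numerator / denominator

# A voter who starts undecided (prior 0.5) but thinks Congress is right
# 80% of the time ends up 80% confident that the banned drug is dangerous.
print(round(posterior_dangerous(0.5, 0.8), 2))  # 0.8
```

On these assumptions, the mere fact of enactment moves an undecided but trusting voter most of the way toward the law's underlying factual premise—which is the attitude-altering mechanism the text describes.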
The same approach may also apply to less empirical judgments. The proper scope of police searches, for instance, is a complex issue. Most people lack well-developed, comprehensive philosophies on the subject that would give them clear answers to most police search questions. So instead of thinking deeply through the matter themselves, they may choose to defer to the Court's expert judgment, if they think that the Justices are usually (even if not always) right on such questions.
We might think of this as the is-ought heuristic, the non-fallacious counterpart of the is-ought fallacy. Because people lack the time and ability to figure out what's right or wrong entirely on their own, they use legal rules as one input into their judgments. As the literature about the expressive effect of law suggests, "law affects behavior … by what it says rather than by what it does." One form of behavior that law A can affect is voters' willingness to support law B.
The is-ought heuristic might also be strengthened by the desire of most (though not all) people to assume that the legal system is fundamentally fair, even if sometimes flawed. Such people may want to trust that legislative and judicial decisions are basically sound, and may therefore rely on those decisions when deciding which future decisions to support.
The is-ought heuristic may in turn reinforce the persistence heuristic mentioned in the discussion of enforcement need slippery slopes (section II.C). Once society adopts some prohibition A—for example, on unauthorized immigration, drugs, or guns—and the prohibition ends up being often flouted, the persistence heuristic leads people to support further steps (B) that would more strongly enforce this prohibition. The is-ought heuristic leads people to support B still further, because the very enactment of A makes its underlying moral or pragmatic principle (that unauthorized immigration, drugs, or guns ought to be banned) more persuasive.
When we think about attitude-altering slippery slopes this way, some conjectures (unproven, but I think plausible) come to mind. All of them rest on the premise that the is-ought heuristic flows from people thinking that they lack enough information about what's right, and therefore using the current state of the law to fill this information gap:
[1.] We should expect attitude-altering slippery slopes to be more likely when many people—or at least a swing group—don't already feel strongly about the topic.
[2.] We should expect attitude-altering slippery slopes to be more likely when many voters are pragmatists rather than ideologues. If the population were a mix of, say, devout Marxists, Objectivists, and Christian fundamentalists—people who have firm underlying belief systems that purport to resolve most moral and even empirical issues—then few people would look to the government's actions for guidance, since most people would already have strong judgments of their own.
But for people who think that many problems can't be answered by a grand theory, and instead require pragmatic weighing of many factors, the judgment of the government may well be one of the factors that they consider. The Burkean, who believes that each person's "own private stock of reason … is small, and that the individuals would do better to avail themselves of the general bank and capital of nations and of ages," is more likely to be influenced by the judgments of authoritative social institutions—judgments that help compose "the general bank and capital" of people's knowledge—than someone who has a more deductive ideology.
[3.] We should expect attitude-altering slippery slopes to be more likely in those areas where the legal system is generally trusted by much of the public. For instance, the more the public views certain kinds of legislation as special-interest deals, the less attitude-altering effect the legislation will have.
[4.] We should expect attitude-altering slippery slopes to be more likely in areas that are viewed as complex, or as calling for expert factual or moral judgment. The more complicated a question seems, the more likely it is that voters will assume that they can't figure it out themselves and should therefore defer to the expert judgment of authoritative institutions, such as legislatures or courts. Thus, replacing a simple political principle or legal rule with a more complex one can facilitate future attitude-altering slippery slopes.
The post Attitude-Altering Slippery Slopes appeared first on Reason.com.