Foreign Affairs
Comment
Cynthia Miller-Idriss

America’s Epidemic of Hate

The murder of seven parade attendees on July 4 in Highland Park, Illinois, was horrifying—and all too predictable. Over the past decade, there have been scores of mass killings across the United States. In 2012, a shooter murdered seven Sikh worshipers in Oak Creek, Wisconsin. In 2015, an assailant massacred nine Black people at a church in Charleston, South Carolina. An attacker killed 23 Walmart shoppers in 2019 in El Paso, Texas. And on May 24 of this year, a shooter killed 21 people in Uvalde, Texas. 

These episodes have no single formula, but recent assailants typically share a similar set of toxic traits: a history of murderous fantasies, violent nihilism, self-harm, or suicidal ideation; withdrawal from friends and family; and streaks of cruelty expressed through the torture or killing of animals, the stalking or harassing of women, or threats of rape and other physical harm. Investigators are still learning about the Highland Park assailant and his motive, but they know that the 21-year-old alleged shooter had threatened to kill his family and commit suicide. Similarly, the man charged with murdering ten Black Americans in May 2022 at a grocery store in Buffalo, New York, had threatened to commit a murder-suicide and had written about stabbing and beheading a cat.

Many attackers, including the one in Buffalo, are domestic terrorists who specifically aim to kill minorities. But whether the assailants are ideological or not, they share another common feature: exposure to violent online content. The alleged Highland Park shooter spent time in toxic digital forums that glorified massacres, and he repeatedly posted violent imagery and videos online, including content suggesting he had “a plan and a desire to commit carnage.” The alleged Buffalo assailant was radicalized by white supremacist content he discovered on the Internet during the pandemic while he was, as he put it, “bored.” He was especially taken by material that promoted a conspiracy theory known as the “great replacement,” which falsely describes an orchestrated effort by Jews or Muslims to replace white, Christian civilization through immigration and demographic changes. Other recent acts of terrorism against Jews, Muslims, Latinos, and now Black people have also been motivated by this conspiracy theory. These attacks are then celebrated in online white supremacist communities as heroic acts of martyrdom, fueling more violence. My research lab has listed talk of the great replacement theory as the first warning sign for parents and caregivers that their children are being radicalized.

To date, government officials, bystanders, and private companies have worked to identify potential assailants based on the threatening online content they post and consume. But better reporting and response tools are Band-Aid solutions aimed at intervening in cases already egregious enough to warrant assessment. To truly prevent domestic terrorism and mass attacks, U.S. policymakers must focus not just on stopping dangerous individuals but on counteracting the online worlds they inhabit. Instead of concentrating its response at the end point of radicalization, the moment right before an attack, the government should work to stop individuals from engaging with online hate and harm to begin with.

HATE HAS A HOME

In the digital world, hate is commonplace. People see violent material and disinformation not just on sketchy Web forums but on popular platforms such as TikTok, Roblox, Steam, Twitch, DLive, and Discord. (The latter four platforms are widely used by online gamers.) These places also serve as conduits for harassment. Users of Discord, for instance, are known to engage in “raiding,” in which members of marginalized groups are spammed with hostile content. Such harassment further compounds hatred. Research shows that people who engage in harmful online behavior such as trolling, harassment, or doxxing are more likely to be persuaded by white supremacist views. They may then join the ranks of the millions of white, overwhelmingly male Americans who lurk on supremacist platforms: websites that degrade marginalized groups and promote violence. Some of these sites refer to successful terrorist actors as “saints” and potential emulators as “disciples.” They can feature memes with scoreboard-style “kill counts.”

Although it is tempting to view these forums almost exclusively as places where the already extreme mingle, plot, and grow more radicalized, people who do not espouse violence also sometimes find themselves on platforms that traffic in dehumanizing content. They, too, can then become desensitized to the material. Some will follow hyperlink after hyperlink into an ever-deeper rabbit hole of us-versus-them thinking that calls for violence.


At their most extreme, these sites feature posters who vie to upload increasingly violent and offensive imagery—including child pornography and images of torture or mass harm—to prove their resolve. But dangerous content doesn’t need to be explicit. The Internet is awash in material that glorifies violence by embedding it in coded speech, memes, or cartoons that use satire and irony to maintain plausible deniability, allowing peddlers to brush off critics as “snowflakes” who can’t take a joke. 

Young people are especially vulnerable to this kind of programming. American teenagers spend more than half their waking hours online, averaging nine hours per day of screen time, and 64 percent of them report encountering hateful content on social media. Adults, who spend less time online, often fail to recognize just how widespread and normalized online toxicity is. Young people, meanwhile, often regard it as normal. Before he killed 19 students and two teachers at an elementary school in Uvalde, the alleged assailant regularly threatened teen girls online with kidnapping, rape, and murder. He spent time in toxic online subcultures where he posted images of dead cats, and he threatened to shoot his grandmother and to shoot children at school. Although plenty of young people observed the alleged shooter’s digital rage, many dismissed it as simply a product of, as one teenager put it, “what online is.” One teen did report the alleged assailant to the social media app Yubo, but said that the app failed to respond.

Unfortunately, the United States doesn’t do much better at monitoring and catching dangerous offline behavior. The alleged Buffalo shooter was referred for a psychiatric evaluation after telling his high school classmates he wanted to carry out mass violence, but he was released. When he graduated from high school two weeks later, investigators stopped paying attention to his case. Within a year, he had targeted and murdered Black Americans.

EXTREMIST IMMUNITY

U.S. policymakers are aware that existing approaches have not succeeded in stopping domestic extremism, and they are working to catch up. In June 2021, the Biden administration issued the country’s first-ever national strategy for countering domestic terrorism. The Department of Homeland Security reorganized its terrorism prevention efforts, including by launching regional offices across the country. And within a week of the Buffalo shooting, New York Governor Kathy Hochul ordered a comprehensive review of the state’s procedures for addressing violent extremism and laid out a pathway for reform. (Many policymakers have also proposed stricter gun laws, which have almost eliminated mass shootings in Australia and the United Kingdom, but these effective measures have virtually no chance of being enacted in the United States.)

The scramble to better respond to domestic extremism has resulted in useful new initiatives and tools. The U.S. government and its state and local counterparts have better reporting systems than they did just a few years ago. They also have new law enforcement training on how to recognize warning signs, as well as bystander intervention and crisis training that can reduce the lethality of violent attacks. Hochul has established a state-level unit dedicated to stopping domestic terrorism. Yet these measures, although all important, are incomplete. To truly curb terrorism and other episodes of mass violence, policymakers need to tackle hateful digital content itself. 

This might seem like a tall order, given the United States’ strong free speech protections. But limiting the reach of extremist content doesn’t require censorship. Decades of public health research show that people can develop resistance to even subtle attempts at persuasion through what is called “attitudinal inoculation”: showing people a small amount of harmful content along with an explanation of how it seeks to influence their views, which helps them build mental defenses against dangerous messaging. Attitudinal inoculation works because people don’t like to find out they are being manipulated. As a result, when individuals learn that someone is trying to sway them, they typically reject that person’s advances.

Over the past two years, my research lab has found that this approach can be successfully applied to online disinformation. In partnership with Jigsaw, a unit of Google focused on improving Internet safety, our team showed participants online videos that prebunked various conspiracy theories and propaganda. In our study, the participants who watched an inoculation video about white supremacist propaganda were significantly less likely to support the messaging than were the noninoculated participants. Viewers also found the source of the propaganda less credible, and they were less willing to offer financial, ideological, or logistical support to violent extremists. Similarly, people who watched inoculation videos about anti-vaccine propaganda were less likely to support sharing vaccine misinformation and disinformation than were the control groups. They were also more willing to get a COVID-19 shot. This process may seem time-consuming, but it isn’t. Our studies show that inoculation videos as short as 30 seconds, the length of a commercial advertisement, are effective at countering online propaganda.

TEACHING TOLERANCE

Yet as useful as it is, attitudinal inoculation cannot by itself stop radicalization. It is still not clear, for example, how long the resistance it confers lasts or whether inoculation against one form of propaganda might confer some resistance against another. Inoculation, then, is just one part of what needs to be a comprehensive effort to address online harms. States must also work to stop violent racism and misogyny from being normalized by teaching about these and other forms of exclusion and inequality, so that people already know to challenge dehumanizing content or jokes by the time they encounter them. When children with questions about race and gender do not have trusted sources of information, they are more vulnerable to false explanations offered by conspiracy theories and online propaganda.

To address these issues, schools will need to actively educate students about the roots of hatred and discrimination whenever they can. This won’t be easy, given that teaching about racism has become deeply politicized. But there are ways to teach about hate and discrimination that can get around very troubling government efforts to banish race from classroom discussions. A group of college students in my lab, for example, recently won a Department of Homeland Security college competition aimed at preventing attacks; their winning entry was an animated video about misinformation, paired with lesson plans for elementary school students. The video takes place in an elementary school full of anthropomorphized animals and tells the story of a new student, a duck named Daniel, who is excluded by the other animals because they have encountered online misinformation claiming that ducks are dumb. By not using humans, the video teaches children to empathize with those who are different from them without reinforcing harmful stereotypes.

Policymakers looking to prevent violent extremism and mass shootings should also expand their educational efforts in other ways. They need revamped public civic education that can strengthen commitments to inclusive democracy. They must work in collaboration with health and human services, the arts community, local parenting groups, and other organizations to cut back on public displays of hate. Communities need support to better recognize early warning signs of possible radicalization or violent mobilization, but they also need to know where to turn for help or treatment options when they do see red flags, just as they would for any other problem, from eating disorders to sexual assault or addiction.

Governments, businesses, and communities should also, of course, look for ways to prohibit the spread of hateful online content itself. Social media companies have a particular responsibility to block radicalizing information. But Americans are fooling themselves if they think the country can ban or arrest its way out of violence. It is impossible to filter out all of the world’s harmful online content, and intelligence authorities and first responders can’t thwart every violent attack. It is time, then, for communities to systematically equip people with the tools they need to recognize and reject propaganda and disinformation, be it through attitudinal inoculation or work with parents, teachers, mental health counselors, coaches, and anyone else in a leadership position. Otherwise, domestic violent extremism will continue to fester and metastasize, putting more lives at risk.
