Officials in Ann Arbor, Mich., Union County, N.C., and Contra Costa County, Calif., are posting infographics on social media urging people to "think critically" about what they see and share about voting and to seek out reliable election information.
Earlier this month, the Federal Bureau of Investigation and the Cybersecurity and Infrastructure Security Agency put out a public service announcement saying cyberattacks are not likely to disrupt voting.
Twitter will soon roll out prompts in users' timelines reminding them that final results may not come on Election Day.
They're all examples of a strategy known as "prebunking" that's become an important pillar of how tech companies, nonprofits and government agencies respond to misleading and false claims about elections, public health and other hot-button issues.
The idea: show people the tactics and tropes of misleading information before they encounter it in the wild — so they're better equipped to recognize and resist it.
Mental armor
The strategy stems from a field of social psychology research called inoculation theory.
"The idea [is] that you can build mental armor or mental defenses against something that's coming in the future and trying to manipulate you, if you learn a little bit about it," said Beth Goldberg, head of research and development at Jigsaw, a division within Google that develops technology to counter online threats. "So it's a little bit like getting physically inoculated against a disease."
To test inoculation theory, researchers have created games like Bad News, where players post conspiracy theories and false claims, with the goal of gaining followers and credibility. They learn to use techniques including impersonation, appeals to emotions like fear and anger, and amplification of partisan grievances. Researchers at the University of Cambridge found that after people played Bad News, they were less likely to think tweets using those same techniques were reliable.
In the past few years, those lessons have begun to be applied more broadly in campaigns that encourage critical thinking, point out manipulative tactics, and pre-emptively counter false narratives with accurate information.
Ahead of this year's midterm elections, the National Association of State Election Directors launched a toolkit for local officials with videos, infographics and tip sheets in English and Spanish. The overall message? Election officials are the most reliable source of election information.
Election officials on the front line
"Every day, people are hearing new rumors, new misconceptions or misunderstandings of the way elections are administered in their state," said Amy Cohen, NASED executive director. "And certainly local election officials are really on the front lines of this because they're right there in the community where voters are."
"Elections are safe and secure. We know because we run them," one graphic reads. "Elections are coming...so is inaccurate information. Questions? We have answers," says another.
A tip sheet local agencies can download and distribute offers ways to "protect yourself from false information about elections": check multiple news sources, understand the difference between fact-based reporting and opinion or commentary, consider the "purpose and agenda" behind messages, and "take a moment to pause and reflect before reacting."
Another focuses specifically on images and videos, noting they can be manipulated, altered, or taken out of context.
The goal is "addressing these patterns of disinformation rather than each individual story," said Michelle Ciulla Lipkin, executive director of the National Association for Media Literacy Education, which worked with NASED to develop the toolkit.
Other prebunking efforts attempt to anticipate false claims and provide accurate information to counter them.
Twitter has made prebunks a core element of its efforts to address misleading or false narratives about elections in the U.S. and Brazil, the U.N. climate summit in Glasgow last year and the war in Ukraine.
Many of these take the form of curated collections of tweets from journalists, fact checkers, government officials and other authoritative sources.
As part of its election prep work, the company identified themes and topics that could be "potential vectors for misinformation, disinformation or other harmful activity," said Yoel Roth, Twitter's head of safety and integrity.
Election prebunks have "provided critical context on issues such as electronic voting, mail-in balloting and the legitimacy of the 2020 presidential election," said Leo Stamillo, Twitter's global director of curation.
"It gives users the opportunity to take more informed decisions when they encounter misinformation on the platform or even outside the platform," Stamillo said.
Twitter has produced more than a dozen prebunks about voting in states including Arizona, Georgia, Wisconsin and Pennsylvania.
It has also published 58 prebunks ahead of the midterms and the general election in Brazil, with another 10 ready to go. That's a reflection of how misleading narratives cross borders, Stamillo said. "Some of the narratives that we see in the U.S., we've also seen in Brazil," he said.
Overall, 4.86 million users have read at least one of Twitter's election-related prebunks this year, the company said.
There is still a lot unknown about prebunking, including how long the effects last, what the most successful formats are, and whether it's more effective to focus on helping people spot tactics used to spread misleading content or to tackle false narratives directly.
Evidence of success
But there's enough evidence supporting the use of prebunks that Twitter and Google are embracing the strategy.
Prebunks focused on techniques or broader narratives rather than specific claims can also avoid triggering partisan or emotional reactions, Google's Goldberg said. "People don't have preexisting biases, necessarily, about those things. And in fact, they can be a lot more universally appealing for people to reject."
Twitter surveyed users who saw prebunks during the 2020 election — specifically, messages in their timelines warning of misleading information about mail-in ballots and explaining why final results could be delayed. It found 39% reported they were more confident there would be no election fraud, 50% paused and questioned what they were seeing, and 40% sought out more information.
"This data shows us that there's a lot of promise and a lot of potential, not just in mitigating misinformation after it spreads, but in getting ahead of it to try to educate, share context, prompt critical thinking, and overall help people be savvier consumers of the information that they're seeing online," Roth said.
Over at Google, Goldberg and her team worked with academic psychologists on experiments using 90-second videos to explain common misinformation tactics including emotionally manipulative language and scapegoating. They found showing people the videos made them better at spotting the techniques — and less likely to say they would share posts that use them.
Now, Google is applying those findings in a social media campaign in Europe that aims to derail false narratives about refugees.
"It's now reached tens of millions of people, and its goal is to help preempt and help people become more resilient to this anti-migrant rhetoric and misleading information," Goldberg said. "I'm really eager to see how promising this is at scale."