Fortune
David Meyer

Google's misinformation vaccine

(Credit: Finn Winkler/picture alliance via Getty Images)

Interesting news out of Google today: it’s expanding its fledgling “prebunking” program, which aims to combat misinformation, into Germany after a trial run in Eastern Europe.

Prebunking is a concept that’s several years old, and it’s based on inoculation theory. Instead of merely trying to debunk misinformation that has already gone viral, the idea is to give people tips on how to spot the sorts of misinformation they’re likely to encounter, so they’re primed to be less susceptible to it. Think of it as a psychological vaccine, in this case administered via ads on platforms including Google’s own YouTube, as well as Facebook and TikTok. Here’s an example from Poland, which tries to steer people away from the false but fast-moving narrative that Ukrainian refugees are being treated better than Polish citizens.

But wait, isn’t that just media literacy (teaching people how to be more critical consumers of information)? Not quite, according to Jon Roozenbeek of the University of Cambridge, one of the academics who came up with the prebunking concept; he has since worked with Google’s Jigsaw division on developing tools to implement it.

For one thing, he told me in a conversation today, prebunking is more likely to reach people who are outside the educational system. It’s also designed to be more lighthearted. “The challenge is, how do you get people interested in how to identify harmful or misinforming content without feeling patronized or talked down to?” Roozenbeek told me. “Leveraging humor is a good way to do it.”

The approach is promising, Roozenbeek said, but he also warned that “we shouldn’t see this as a panacea.”

“Individual-level interventions” like prebunking and media literacy programs need to be repeated frequently; otherwise, people forget what to look for. “System-level interventions,” such as the EU’s incoming Digital Services Act, which will force Big Tech to tackle disinformation, have a much greater effect, he said.

However, there’s a tradeoff, as those system-level interventions are also the ones that carry greater risk. Facebook could scrub its platform of all misinformation if it wanted to, Roozenbeek said, but in doing so it would also remove a lot of content that isn’t misinformation, raising big free-speech problems. “If you debunk someone, none of their rights are violated. If I show someone a YouTube ad that explains to them how a false dichotomy works, again, no harm done,” he noted.

All this is cutting-edge stuff, concocted to confront an urgent, visible problem. But what happens when automatically generated misinformation becomes more prevalent? Roozenbeek believes bots could become harder to identify as such, as they are integrated with generative A.I. systems like OpenAI’s ChatGPT. If and when that happens, it will again require new solutions.

“I don’t think prebunking or media literacy help all that much [in such a scenario], because there has to be a particular skill you want people to pick up on, in order for that to be effective,” Roozenbeek said. “You can’t do that for content that has no markers you can pick up on.”

Want to send thoughts or suggestions to Data Sheet? Drop me a line here.

David Meyer

Data Sheet’s daily news section was written and curated by Andrea Guzman. 
