Renée DiResta is a writer and researcher who studies online manipulation. In 2018, she led research for a US Senate investigation into the activities of the Russian Internet Research Agency, and in 2019 she joined the Stanford Internet Observatory – a non-partisan project to analyse online disinformation. In June this year, after a Republican-led investigation, her contract, along with those of many other staffers, was not renewed, prompting some observers to claim the group was being dismantled due to political pressure.
What inspired you to write about what you call the “propaganda machine”?
I started to feel that propaganda had fundamentally changed. The types of actors who could create it and spread it had shifted, and the impact it was having on our society was quite significant, but we weren’t using the word. We were using words like “misinformation” or “disinformation”, which seemed to be misdiagnoses of the problem. And so I wanted to write a book that asked, in this media ecosystem, what does propaganda look like?
And what did you conclude?
A propagandist is an individual or entity that very deliberately and systematically uses things like framing or slight manipulation of information to promote a worldview, or push out a particular type of agenda.
That role can be picked up by anybody at this point. We all have the reach of mass media, networked distribution, communities of people who are very, very passionate about getting their messages out. Oftentimes, that’s just called “activism”, but then there are also times when you do see manipulative tactics begin to come into play: when you see the use of automation, the use of AI, efforts to obscure the origin of messages, state actors coming in to pour gasoline on existing fires.
The information environment now is radically different from even a decade ago. What are the most important changes?
It’s directly participatory. For a long time, we thought of propaganda as something that was said to the public, and now we have a model where the public can actually participate quite directly in amplifying the messages that they want out there in the world.
What you also see is the rise of the figure of the influencer. Influencers didn’t exist in prior media environments. They position themselves much more as, “I am a person who is just like you. Here are my opinions. I’m going to share them.” And, oftentimes, what you’re not seeing is that there are intersections with political campaigns.
How does algorithmic curation play into this?
There is a triad of influencer, algorithm and crowd. The influencer has to produce content that the algorithm wants to serve. This is something that’s very important for people to understand: just because you follow somebody on social media doesn’t mean that you see all of their posts.
There’s a process of algorithmic curation that happens: algorithmic feed ranking and algorithmic content moderation. The algorithm might decide that a certain keyword is being downranked or throttled for some reason, and the influencer has to be aware of that, or they’re not going to get that post seen by very many people. So you see influencers producing content for their audience, but also for the algorithm.
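She is describing ranked feeds in general terms rather than any specific platform’s code, but a purely illustrative sketch may make the mechanism concrete. Everything below – the keyword list, the weights and the scoring rule – is invented for illustration and does not reflect how any real platform works.

```python
# Toy sketch only: invented keyword list, weights and scoring rule,
# not any real platform's ranking system.
from dataclasses import dataclass

DOWNRANKED_KEYWORDS = {"example-throttled-term"}  # hypothetical throttled terms

@dataclass
class Post:
    author: str
    text: str
    engagement: float  # prior engagement signal, 0..1

def score(post: Post) -> float:
    """Higher score = more likely to be served, regardless of who follows the author."""
    s = post.engagement
    if any(kw in post.text.lower() for kw in DOWNRANKED_KEYWORDS):
        s *= 0.1  # quiet downranking: the post isn't removed, it's just rarely shown
    return s

def rank_feed(posts: list[Post], limit: int = 10) -> list[Post]:
    # Followers see only the top-scoring slice, not everything an account posts.
    return sorted(posts, key=score, reverse=True)[:limit]
```

The point of the sketch is the asymmetry it illustrates: an influencer who does not know which terms are quietly penalised will produce posts that score poorly and are rarely served, which is why they write for the algorithm as much as for their audience.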
Does old media still matter?
Old media increasingly covers what’s happening online, which gives you, too, a way to be aware of that controversy and conversation.
We also see this phenomenon of “trading up the chain”. You’ll see a rumour begin to appear in an online ecosystem. The partisan media covers it credulously, treating it as if people need to take it seriously.
And once one of them picks it up and reports on it, the next outlet can cite that outlet, and it will just continue to move up the chain – until, all of a sudden, an entire partisan media ecosystem is talking about that same topic.
That process moves the online rumour into the old media ecosystem. The two aren’t separate at all; it’s just a matter of how they interact and when.
A lot of what gets called misinformation begins as jokes that escape containment. Can you have purely accidental propaganda?
We used to try to differentiate between the two things you’re talking about: misinformation versus disinformation. The differentiator was intent. Let’s say Russian accounts put out content: that is clearly disinformation; they are doing it quite deliberately. But then your unwitting grandmother picks it up and shares it, and she just happens to truly believe it. Is she sharing disinformation?
The reason I like the term “propaganda” is that that spectrum has been built into it and understood since day one. There’s always been a sense that propaganda is information with an agenda that serves the interests of the creator. The question of who knows what and when is less important than the understanding of this communication as a particular type of information in service to a particular agenda.
And then just as we start to understand and conceptualise all of this, along comes generative AI.
What happens with AI is that it takes the cost of creation to zero, and this means anybody can create compelling images, video and – in my opinion, most importantly – text.
But the content still has to get distributed somehow. A lot of the accounts that we see are pumping out this stuff, but they’re not getting any pickup.
They exist. It’s important to note that, and to understand what that looks like and what that has the potential to do, but we’re not seeing that these accounts really have an impact in the conversation, particularly in the text space.
What can we do about this?
I wrote [Invisible Rulers] in part to explain how the system worked. If you show people how a magic trick works, they will remember it for ever. I think that that is a much more effective way to engage with propaganda and rumours, to say: “This is what it looks like, here’s how it works, here’s how it spreads.”
I found Noam Chomsky’s book Manufacturing Consent very impactful as a reader, to understand how the incentives of mass media shaped outputs. The point of his book wasn’t that mass media is terrible and we should never read it again; it was that we should be informed about how it works, so that we can be informed consumers, and I think that we can do that in this media environment as well.
It seems that, in the recent UK election, a lot of these fears never came to pass. The campaign was relatively normal – or at least as normal as you can get when the governing party is melting down. Do you hold out any hope that we might get the same story in the US?
It would be wonderful if that happened. When you are articulating threats and saying, “This is the worst case scenario”, you don’t want to be proven right! You want to say: “Here’s what you should be aware of, here’s what could go wrong, be prepared and let’s all rejoice if it doesn’t come to pass.”
But you don’t think that’s likely.
I think the US is, unfortunately, a special case, in large part because of what happened on 6 January [2021], and the deep, sustained belief in conspiracy theories that have come to shape our politics in extremely mainstream ways.
My concern is that people will think that the end justifies the means and will be willing to use manipulative tactics because the stakes are seen as existentially high.
How did you end up being branded “CIA Renée” and what does it demonstrate about the conspiracy theorists you’ve written about?
In late 2022, our work studying the Big Lie in the 2020 election was reframed as a vast conspiracy theory by a man who worked at the US state department for a couple of months at the very end of the Trump administration. Although he had no inside knowledge of anything we’d done, he leveraged these credentials to establish himself as an authoritative voice on “censorship” and the “deep state” – and he went on about the CIA constantly. He set out to tar not only our work, but us personally. Attacking the messenger is a fairly established smear tactic. In my case, I’d interned for the CIA decades ago, as an undergraduate in college. That grain of truth was leveraged to insinuate that I am somehow still secretly affiliated with the CIA. Other bloggers, who began to write about and monetise the conspiracy theory that our work underpinned a vast “censorship” cabal, picked up the insinuation and wrote posts about my supposed “rise to the highest levels of the US intelligence community” – pure, unadulterated nonsense, but credulous audiences ate it up. And so, the legend of CIA Renée was born.
The Stanford Internet Observatory did valuable work calling out propaganda, yet it ended up being drawn into the partisan political battleground and dismantled. What lessons do you draw from this tale?
Institutions are ill-equipped both to recognise partisan hatchet jobs and to know how to effectively respond to them. Those of us who study propaganda and rumours were quite clear-eyed about what was happening from the moment the first congressional inquiry arrived. We understood what the goals of this supposed investigation were, and how it would progress into leaks, lies and harassment through a process of laundering claims through aligned media and influencers. This “oversight inquiry” was intended to feed a sustained propaganda campaign that sought to undermine the idea that studying (or mitigating) viral rumours and disinformation campaigns is a worthwhile thing to do. Countering such a campaign requires communicating. The problem is that communicating about the attacks calls attention to them, which is counter to established institutional thinking about how to handle a crisis. Institutions need new playbooks.
• Invisible Rulers: The People Who Turn Lies into Reality by Renée DiResta is published by PublicAffairs US (£25). To support the Guardian and Observer, order your copy at guardianbookshop.com. Delivery charges may apply