Experts warn the task of distinguishing what's real from what's not will impose a significant mental and cognitive burden on people in the AI era.
Why it matters: Misinformation has already fueled significant social problems, ranging from polarization to vaccine skepticism. AI-generated content risks intensifying those issues and making it more difficult for people to make sense of the world around them.
Driving the news: Experts are raising alarms about the mental health risks and the emotional burden of navigating an information ecosystem driven by AI that's likely to feature even more misinformation, identity theft and fraud.
- Speaking to a group of global news executives last month, Karen Silverman, a member of the World Economic Forum's AI Global Council, warned that the role news organizations will play in helping consumers figure out what's real will have massive consequences for human health and national security.
- The cadence at which AI will bring change to our daily lives, including our information habits, "is making everybody nervous and unbalanced," Silverman said. "That's its own security and mental health risk."
- "The advanced technologies were producing today put extreme pressure on distinguishing between data, information and knowledge," Silverman said.
- "How we pay for expertise, how we value expertise, and what we're willing to defer to the machine as opposed to reserve for humans is going to change. How we think about data information and knowledge will change," she added.
Misinformation can be a cognitive burden as well. It can have a lingering effect on our reasoning even after it has been corrected, and it can be used to plant false beliefs and memories that shape our behavior, such as what foods we will eat, or how we remember the news.
- Doctored photos are "a nifty way to plant false memories," and "things are going to get even worse with deepfake technology," psychologist Elizabeth Loftus said at last month's Nobel Prize Summit, which focused on misinformation.
- Faster photo-manipulation tools, including Google's Magic Editor and Adobe's Generative Fill feature, are blurring the line between real and AI-generated memories, shaping what we will remember in the future, Wired's Lauren Goode writes.
State of play: AI-generated misinformation is already causing confusion.
- A fake image purporting to show an explosion at a building near the Pentagon, believed to be created with AI, spread rapidly on social media last month and caused a brief dip in the stock market.
- A fake video of Russian President Vladimir Putin declaring martial law and a military mobilization aired on Russian media Monday, sowing confusion as Ukraine ramps up its offensive. It's unclear who was behind the video, but the Russian government called it a "hack."
But, but, but: Similar concerns about misinformation and the difficulty of telling fact from fiction emerged with the advent of the web in the '90s and again with the rise of social media.
- There was collateral damage in both eras, but media and democracy haven't yet collapsed as predicted.
- "We've been here before" — with photography at the turn of the last century and Photoshop in the late '90s — and learned to deal with it, said illusionist Eric Mead at the Nobel Foundation summit. In the long-term, he believes we'll also learn to deal with these new tools and "find our way into using them for our benefit rather than our ruin."
- But in the short term, he predicts a backlash and mistrust of digital life and a retreat to face-to-face communication. "For me, that's a very hopeful note."
The big picture: Data shows Americans were already frustrated and worried about misinformation even before the AI boom.
- An overwhelming majority (90%) of Americans ages 16 to 40 worry about deception and misinformation, per a 2022 Media Insight Project study. Furthermore, 70% of Americans in that age bracket "feel they personally have been victims of it."
AI-generated images and text are a target for policymakers around the world.
- The Biden administration last month introduced a slew of new actions aimed at managing AI innovation responsibly.
- The President’s Council of Advisors on Science and Technology (PCAST) has launched a working group on generative AI that will examine the technology's negative consequences, such as misinformation and impersonation.
- A wave of legislative efforts to regulate AI, including a bill that would require a disclaimer on images, video, text and other generative AI outputs, is underway in Congress, Axios' Andrew Solender writes.