California has taken a significant step in combating election deepfakes by enacting some of the strictest laws in the country. Governor Gavin Newsom recently signed three bills into law at an artificial intelligence conference in San Francisco, making California a pioneer in this area.
These laws aim to prohibit the use of artificial intelligence to create and circulate false images and videos in political ads close to Election Day. However, two of the laws are now facing legal challenges through a lawsuit filed in Sacramento.
One of the laws took effect immediately and allows individuals to sue for damages over election deepfakes, while the other requires large online platforms to remove deceptive material starting next year.
The lawsuit argues that these laws infringe on free speech rights and empower individuals to take legal action over content they disagree with. The Governor's office clarified that the laws do not ban satire and parody but require disclosure of the use of AI in altered videos or images.
Despite criticisms, lawmakers believe these laws are essential to prevent the spread of election disinformation and maintain public trust in U.S. elections. The most comprehensive law targets not only materials that could influence voting but also those misrepresenting election integrity, including deepfake videos of election workers and voting machines.
While the effectiveness of these laws remains uncertain, they could serve as a deterrent to potential violators. Public Citizen, a consumer advocacy organization, noted the difficulty of combating deepfakes in real time: legal proceedings move slowly, while fake content spreads rapidly.
The third law signed by Newsom, which takes effect after the 2024 election, requires campaigns to disclose their use of AI-generated materials, emphasizing transparency in political communications.
Despite concerns about the constitutionality of these laws, California's proactive approach to addressing election deepfakes sets a precedent for other states grappling with similar issues.