Bipartisan concern over AI-generated election interference has prompted a patchwork of laws across the country, as state lawmakers seek to blunt the impact of misinformation and keep deepfakes from overwhelming voters.
More than a dozen Republican- and Democratic-led states have enacted legislation this year to regulate the use of deepfakes – realistic fake video, audio and other content created with AI – in campaigns. The laws come amid warnings from the Department of Homeland Security over the ability of deepfakes to mislead voters, and as questions remain over whether Congress can take meaningful action before November.
Florida, Hawaii, New York, Idaho, Indiana, New Mexico, Oregon, Utah, Wisconsin, Alabama, Arizona, and Colorado have passed laws this year requiring disclosures in political ads that include deepfake content. Michigan, Washington, Minnesota, Texas, and California already had laws regulating deepfakes; Minnesota updated its law this year to require, among other provisions, that a candidate forfeit their office or nomination if they violate the state's deepfake laws.
In states such as New York, New Mexico, and Alabama, victims can seek a court order to stop the content.
Violators of deepfake-related laws in Florida, Mississippi, New Mexico, and Alabama can face prison time. Breaking the law could also lead to hefty fines in some states: in Utah and Wisconsin, violators can be fined up to $1,000 per violation, and in Oregon and Mississippi, fines can reach up to $10,000.
AI poses a unique challenge because the technology evolves rapidly and allows people with minimal technical knowledge to create convincing deepfakes. Their use in politics has heightened concerns about the spread of misinformation and underscored the need for regulations robust enough to blunt their impact.
Arizona state Rep. Alexander Kolodin, a Republican, sponsored legislation that allows candidates to seek a court order declaring manipulated content to be a deepfake. The measure is intended to give candidates a way to counter false narratives that can quickly circulate online.
While some progress has been made at the state level, federal action on regulating deepfakes remains uncertain. Despite bipartisan support for legislation requiring clear labeling of deepfakes, Congress has yet to act decisively on the issue.
Agencies such as the Federal Election Commission and the Federal Communications Commission are being urged to regulate AI in campaign ads to prevent voters from being deceived. But the pace of regulatory action at the federal level remains slow, leaving the task largely to state initiatives.
As the 2024 election approaches, the need for comprehensive regulations addressing deepfakes in political campaigns grows increasingly urgent. State efforts to educate voters and train election workers to identify deepfakes are crucial steps in safeguarding the integrity of the electoral process.