It has happened many times already: Videos using artificial intelligence technologies purport to show powerful politicians saying or doing things they never said or did, like former President Donald Trump hugging Anthony Fauci or Sen. Elizabeth Warren saying that letting Republicans vote could undermine elections.
Both of those so-called deepfake videos have been widely debunked, but a subversive ad surfacing in the heat of next year’s presidential election in a critical swing state could cause the kind of chaos lawmakers of both parties want to avoid.
A bipartisan group of lawmakers proposed legislation last month that would ban the “distribution of materially deceptive AI-generated audio or visual media” about individuals seeking federal office. The group is led by Minnesota Democrat Amy Klobuchar, who chairs the Senate Rules and Administration Committee, and includes Sens. Josh Hawley, R-Mo.; Chris Coons, D-Del.; Susan Collins, R-Maine; Pete Ricketts, R-Neb.; and Michael Bennet, D-Colo.
The legislation would allow people who are the subjects of such fake ads to sue the person or entity responsible for creating and distributing them, though not the online platform where the ad is placed. Nor would it penalize radio, TV and print news media that publish stories about such ads, provided they clearly state that the ad in question is fake, or the use of such techniques in parody or satire.
In the House, Rep. Yvette D. Clarke, D-N.Y., introduced legislation in May that would require distributors of political ads to disclose whether generative AI technologies such as ChatGPT were used to create any audio or video in the ads.
At a Senate Rules Committee hearing late last month, Klobuchar said she hoped the legislation to ban deceptive ads would move by the end of the year, because “given the stakes for our democracy, we cannot afford to wait.”
Senate Majority Leader Charles E. Schumer also appeared at the hearing and backed Klobuchar’s legislation, saying that in the absence of congressional action to prohibit deceptive ads, both Democrats and Republicans will be affected, adding that “no voter will be spared … no election will be unaffected.”
Civil rights groups, political consultants, free-speech advocates and lawmakers across the political spectrum agree that AI-generated deceptive ads pose risks to the democratic process by misleading voters. The trouble, though, is figuring out where to draw the line on what constitutes deception, and how to enforce prohibitions. Even if deceptive ads can be banned in the United States, foreign adversaries could still target U.S. voters with them, raising questions about the role of social media platforms in spreading these ads.
In an Oct. 5 letter to Mark Zuckerberg, CEO of Meta, and Linda Yaccarino, CEO of X, formerly known as Twitter, Klobuchar and Clarke asked what policies the platforms have for requiring creators to disclose AI-generated ads and how they intend to alert users.
A spokesman for Meta declined to comment, and representatives for X could not be reached.
To complicate matters, some states, including Minnesota and Wyoming, are considering legislation that would prohibit the creation of AI-generated ads that impersonate real people.
Despite the bipartisan proposal by Klobuchar, Hawley and others, there are plenty of skeptics both inside and outside Congress.
Sen. Bill Hagerty, R-Tenn., a member of the Senate Rules Committee, said at the hearing last month that banning deceptive AI-generated audio or visual media as stipulated in the Klobuchar bill is a “vague concept.”
A broad prohibition on the use of AI to alter images could sweep in photo-editing software that uses AI tools to make a person look younger than he or she is, Hagerty said. “Congress and the Biden administration should not engage in heavy-handed regulation with uncertain impacts that I believe pose a great risk to limiting political speech.”
How far is too far?
Some free-speech advocates say that political campaigns already use deceptive techniques either to promote their candidates or to weaken their opponents.
Even before the advent of AI tools, campaigns used other methods to deceptively edit images, audio and video, said Ari Cohn, free speech counsel at TechFreedom, a nonprofit that focuses on internet freedom and technology.
“If you think [deceptive ads are] a problem then it would make sense to address it, whether it’s created by AI or not,” Cohn said in an interview. “I’m not sure it makes sense to address it only when an ad is generated by AI.”
Neil Chilson, a senior research fellow at the Center for Growth and Opportunity at Utah State University, testified at the hearing that even old-fashioned makeup and studio lighting could be seen as deceptive tools used to make a candidate look younger than he or she is.
Chilson also testified that AI tools have beneficial uses, such as translating a candidate’s message into multiple languages.
One way to craft legislation that would likely pass muster in courts may be to prohibit AI-generated ads “within two weeks of an election,” when voters and watchdog groups may not have time to uncover the truth about an ad, Cohn said. Legislation that would apply “as soon as someone is a candidate for federal office or is thinking about being a candidate is far too broad in scope” and could lead to frivolous lawsuits that could curb political speech, he said.
The goal of federal legislation on the issue is not to ban all forms of AI-generated ads but to outlaw only those that are misleading, in order to “protect the political process from deception that undermines the integrity of the elections and people’s ability to make true assessments,” Maya Wiley, president of the Leadership Conference on Civil and Human Rights, said in an interview.
She cited a study by the Rand Corp., a U.S.-funded think tank, that found between one-third and one-half of a sample of people could not distinguish authentic videos from deepfakes. “All populations exhibit vulnerability to deepfakes, but vulnerability varies with age, political orientation, and trust in information sources,” the 2022 study found.
Students and younger people are less vulnerable to deepfakes, while older adults are more likely to fail to distinguish between real and AI-generated videos, according to the Rand study, titled “Deepfakes and Scientific Knowledge Dissemination.”
Wiley said she is concerned that layoffs by large tech platforms such as Meta, X and Google included many who worked in trust-and-safety roles that weeded out disinformation. Without adequate safeguards, the platforms once again could be used by foreign adversaries to target American voters with deceptive ads, Wiley said.
Creators of political ads and messages are equally concerned about the spread of AI-generated deceptive ads but want to make sure that the boundaries of what is and what isn’t acceptable are drawn clearly, said Larry Huynh, partner at Trilogy Interactive, a digital advocacy and advertising agency, and president of the American Association of Political Consultants.
Using AI tools to smooth out a background in an image or a video to make it look uniform, for example, is a benign use of the technology, Huynh said in an interview, but more sinister uses, such as putting words in people’s mouths, should be prohibited.
“We should all be very concerned and wary about [deepfakes] because it isn’t just about misleading one voter. I think it is an affront to the democratic process,” Huynh said. “As a political consultant, if something hurts democracy, it hurts the business and it’s your livelihood. We do not want to participate in something that reduces people’s confidence in our elections.”