WASHINGTON—Curtis LeGeyt, president and CEO of the National Association of Broadcasters, told a Senate forum this week that while local broadcasters have historically been among the most trusted sources of news and information, the rapid dissemination of misinformation fueled by advances in generative artificial intelligence is creating new challenges.
“Broadcasters are committed to delivering trusted, fact-based local and national news and are investing heavily to ensure stories are verified before they are aired,” LeGeyt said during the Senate AI Insight Forum “Transparency, Explainability, Intellectual Property and Copyright.” “While many broadcasters are responsibly embracing AI tools for operational efficiencies, such as scripting commercials and first drafts of content for human review, AI presents challenges to the critical local journalism broadcasters provide.”
Calling the trusted journalism broadcasters provide “the antidote” to AI-based misinformation, LeGeyt said the “uncontrolled advancement” of generative AI increases the cost of vetting stories and footage for local broadcasters and leads to copyright violations.
“The ingestion of broadcasters’ copyrighted news content in AI systems without authorization or compensation risks further diminishing reinvestment in local news,” he said. “AI tools use stations’ work product without compensation while at the same time broadcasters are being forced to devote more resources to fight the misinformation AI systems so easily proliferate.”
“Broadcasters’ expressive content is particularly valuable for AI ingestion precisely because it is vetted and trusted,” LeGeyt added. “If broadcasters are not compensated for use of their valuable, expressive works, they will be less able to invest in local news content creation.”
LeGeyt used the current violence in the Middle East as an example of the issues broadcasters face in verifying footage.
“After the terrorist attacks on Israel in October, fake photos and videos reached an unprecedented level on social media in a matter of minutes,” he said. “Of the thousands of videos that one broadcast network sifted through to report on the attacks, only 10% of them were usable or authentic.”
Beyond the difficulty of distinguishing AI-generated fake news from real reporting, LeGeyt raised several other concerns.
Using generative AI in news production “increases the likelihood of legitimate, copyrighted broadcast content being ingested and then mixed with unverified and inaccurate third-party content, especially when the particular use wasn’t authorized in the first place,” LeGeyt said. “There is also particular concern among broadcasters about AI tools being used to create images, video and audio that replace the likeness of a trusted radio or television personality to spread misinformation or perpetrate fraud. The use of AI to doctor, manipulate and distort information is a significant and growing problem that must be addressed in balance with the First Amendment.”
LeGeyt cited numerous efforts from broadcasters to deal with misinformation, including CBS News’ “CBS News Confirmed” unit that investigates misinformation and deepfakes; Tegna’s VERIFY team, used across all of its 49 newsrooms to evaluate stories and help viewers identify misinformation; and Hearst Television’s partnership with FactCheck.org to produce segments combating misinformation for its stations across the country.
LeGeyt’s warnings come amid the 2024 presidential election campaign, in which AI-generated misinformation could have serious consequences for the country.
“According to the Pew Research Center, an overwhelming majority of Americans believe that ‘false information online’ is a major threat to our democracy,” LeGeyt said. “According to Morning Consult, just 37% of Americans believe the upcoming 2024 election will be both honest and open, and nearly two-thirds believe that disinformation will influence the outcome.”