Hello, it’s tech fellow Andrea Guzman bringing you today’s Data Sheet while David is off.
A few days ago, I opened TikTok to the sound of Taylor Swift bemoaning her poor fans and saying she doesn’t care that her tickets cost more than $1,000. It was such an out-of-character statement for the pop darling, riding a wave of good press over her upcoming Eras Tour, that it was obvious someone had used A.I. to produce it.
Whoever made the Swift deepfake did so for absurdity's sake, even titling the TikTok sound "taylor speaking facts." But the possibility that someone could use generative A.I. to create synthetic media for more nefarious purposes worries researchers and politicians.
The nonprofit group Partnership on AI spells out cases like these in its framework on responsible practices for synthetic media. It notes that techniques like "representing a specific individual having acted, behaved, or made statements in a manner in which the real individual did not" can be used to cause harm.
One way some A.I. researchers hope to mitigate malicious content is by opening generative A.I. systems to outside scrutiny, so that parties beyond the company that builds the A.I. and its partners can test for shortcomings and biases. Critics of open-source systems, however, argue that opening the tools to the public makes it more likely that bad actors will manipulate them, and instead champion closed A.I. research.
This debate heated up in recent days after Meta's powerful LLaMA large language model leaked to 4chan. The Facebook parent had previously made the model available only to approved researchers and government organizations; now anyone can download it, tamper with it, and deploy it however they wish. As Vice notes, this was "the first time a major tech firm's proprietary AI model has leaked to the public." (Despite the leak, Meta said it would not discontinue its open A.I. research practices.)
As The Verge explains, advocates for open research think the leak will pressure A.I. developers into establishing safeguards. But for now, most of the biggest A.I. players are sticking with their closed-door approach, offering the public only portals and chatbots to use and interact with.
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
Andrea Guzman