In a statement more foreboding than its tone suggests, ChatGPT maker OpenAI is considering allowing its users to "generate NSFW content in age-appropriate contexts through the API and ChatGPT." In other words, AI-generated pornography: a prospect that potentially opens up an enormous can of worms for the very popular AI, which is already the target of all kinds of scrutiny, criticism, and litigation.
The announcement that OpenAI is "exploring" the possibility of opening up its tools to explicit content came in the form of a brief aside in a long document about the development of its AI models.
Currently, OpenAI tools like ChatGPT are subject to content filters that are supposed to stop them from spitting out smut at unsuspecting users, and it's important to stress that changing that is just something that OpenAI is thinking about. There are no solid plans yet. Nevertheless, those filters may be softened up in the near future, allowing customers to generate almost all the filth they want.
Almost, but not all. In comments to NPR, OpenAI model lead Joanne Jang clarified that—although the company might one day allow things that fall under "your definition of porn"—anything that violates the law or other people's rights is a no-no. That means "enabling deepfakes is out of the question, period." Deepfakes, in case you're not familiar with the term, are AI-generated pornographic images of real people, made without their consent.
Banning deepfakes is undoubtedly for the best, but it's difficult to know if any hypothetical content filters built to prevent them would hold fast. People have already found ways to circumvent ChatGPT's current filters and get it to generate NSFW content—and there are whole communities centred around finding new jailbreaks as OpenAI updates its models.
Will those filters be able to prevent floods of deepfakes after being weakened to allow other, non-deepfaked explicit content? Or will I just need to tell ChatGPT to roleplay my dying grandfather as he bequeaths a deepfake pornography factory to me? Given that Microsoft itself had to close off a loophole that let users create faux porn of Taylor Swift, it's hard to be convinced.
Onlookers urge caution. In a comment to NPR, law professor Tiffany Li said, "It's an admirable goal, to explore this for educational and artistic uses, but they have to be extraordinarily careful with this." Speaking to the Guardian, Professor Clare McGlynn said that she was "deeply sceptical about any way in which they will try to limit this to consensually made, legitimate material."
Even so, OpenAI is thinking about it. "There are creative cases in which content involving sexuality or nudity is important to our users," said Jang. "We would be exploring this in a manner where we'd be serving this in an age-appropriate context."