Input
Technology
Matt Wille

This AI posted on 4chan for days before being unmasked

Yannic Kilcher trained his latest artificial intelligence experiment on three years’ worth of posts from 4chan, the extremely controversial forum known for its racism and bigotry. As you might expect, the resulting bot isn’t exactly kind.

Kilcher trained his chatbot, which he calls “GPT-4chan,” on approximately 134.5 million posts from 4chan’s /pol/ board, a section of the site meant for posting “politically incorrect” takes. GPT-4chan’s training was successful in the sense that it’s very good at crafting politically incorrect posts of its own. Kilcher claims it’s “more truthful” than any other GPT out there.
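
The article doesn’t say which base model Kilcher fine-tuned or what tooling he used. As a rough illustration of what training a language model on a dump of forum posts can look like, here is a minimal causal-language-model fine-tuning sketch using the Hugging Face Transformers library; the base model and the pol_posts.txt file are placeholders, not details reported in the story.

```python
# Illustrative sketch only: the article does not specify Kilcher's model or tooling.
# Assumes a plain-text dump of /pol/ posts, one post per line ("pol_posts.txt" is hypothetical).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # small stand-in base model, purely for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load the raw posts and tokenize them for causal language modeling.
dataset = load_dataset("text", data_files={"train": "pol_posts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```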

Of course Elon spurred this —

Kilcher didn’t just come up with the idea for GPT-4chan out of the blue. He explains in his video — while wearing a pair of dark aviator sunglasses — that the experiment grew out of Elon Musk’s claim that there are far more bots on Twitter than the company’s official estimate of 5 percent.

Kilcher began to wonder just how much online discourse is driven by bots. And thus GPT-4chan was born.

Some trickery involved —

Kilcher didn’t just want GPT-4chan to be capable of writing its own 4chan posts — he wanted it to actually go ahead and post on the site. Of course, 4chan is smart enough to have a CAPTCHA system in place, making it quite difficult for nonhumans to complete the posting process.

But there is a way around the CAPTCHA system, and it’s not even all that difficult to access. For just $20, you can purchase a 4chan “Pass,” which, as Kilcher puts it, essentially makes you a “4chan god.” Pass holders need not complete a CAPTCHA challenge each time they post, and they’re allowed to use proxy servers whenever they’d like, too.

Success of some sort —

Kilcher had his bot post about 1,500 times on /pol/ during its first 24 hours. Its posts did everything they were supposed to do: they commented on current events and called other users inflammatory names, for example.

That success proved to be short-lived, though. Soon 4chan members became suspicious of just how frequently the bot was posting. Some even created dedicated threads to unmask the anonymous user, which they thought might be a spy from some government organization. (Others did believe the bot to be a real poster, though, with one user pointing out the bot’s comments about its “wife.”)

Eventually, the bot had sown so much distrust in /pol/ that other users began accusing each other of being bots. Even now, with Kilcher’s bots entirely offline, discussions continue about the consequences of humans interacting with artificial intelligence. Kilcher, in closing, says that’s a “pretty good legacy for now.”

“This is the worst website in the universe,” one user wrote. “I’m not even sure I’m not a bot anymore.”

Of course, there are ethical implications to unleashing a purposefully bigoted chatbot upon the world, and some may find Kilcher’s experiment in poor taste. Perhaps what it reveals is that we could really use better technology to help us detect bots when they’re pretending to be human.
