Fortune
Eleanor Pringle

OpenAI will pay you to join its 'bug bounty program', and hundreds have already signed up, finding 14 flaws within 24 hours

Sam Altman pictured onstage at the New Work Summit in California (Credit: David Paul Morris/Bloomberg via Getty Images)

A major payout could be on the way for ChatGPT users: all they have to do is find serious bugs in OpenAI's systems. Ethical hackers, technology enthusiasts, safety researchers, and programmers could be in for a windfall thanks to San Francisco-based OpenAI's new "bug bounty program," which pays out set amounts per vulnerability reported, with a minimum of $200 per case raised and validated.

It's part of what OpenAI calls its "commitment to secure A.I.," and comes as developers face increasing pressure to pause work on advanced bots in order to establish better safety parameters.

Announcing the scheme on its blog yesterday, OpenAI wrote: "We invest heavily in research and engineering to ensure our A.I. systems are safe and secure. However, as with any complex technology, we understand that vulnerabilities and flaws can emerge. We believe that transparency and collaboration are crucial to addressing this reality."

Ethical hackers can look for bugs in a range of OpenAI functions and frameworks—including the communication streams that share data from the organization with other third-party providers.

According to Bugcrowd, the site where users can sign up for OpenAI's bounty project, 14 vulnerabilities had already been identified at the time of writing, with the average payout sitting at $1,287.50.

The stream of "accepted" vulnerabilities and payments shows most of the rewards fall in the $200 to $300 bracket, though one payout of $6,500 has already been handed out. The blog says the program will pay a maximum of $20,000 for "exceptional discoveries" but offers little clarity beyond that.

The turnaround is quick, too: flagged bugs are confirmed or rejected within two hours of submission, on average. More than 500 people have already signed up for the program, many hoping to land on the "hall of fame" list of users who identify the most pressing issues.

Rules of engagement

Unsurprisingly, OpenAI has set out a very strict code for how and where these hackers should be looking for vulnerabilities, and what they should be doing with the information once they're privy to it.

The program's overview—which is around 2,500 words long—outlines that incorrect or malicious content, for example, is not covered under the scheme.

Instead, hackers should be looking for authentication and authorization flaws, payment problems, and vulnerabilities in OpenAI's application programming interfaces (APIs) and the plugins OpenAI has built, to name a few areas.

It's clear the team led by Sam Altman is taking no chances on the project's aims being misinterpreted: some paragraphs in the program outline are preceded by the warning "STOP. READ THIS. DO NOT SKIM OVER IT."

The business has similarly set out 10 rules of engagement, which include keeping "vulnerability details confidential until authorized for release by OpenAI's security team" and the "prompt" reporting of vulnerabilities.

OpenAI's pledge

As well as posting the project on the bug bounty platform, which is also used by the likes of bank NatWest, clothing retailer Gap, and jobs site Indeed at the time of writing, OpenAI has outlined what it will do with the information reported.

In the program overview, OpenAI pledges to work closely with researchers to validate reports promptly, to remediate vulnerabilities in a "timely manner," and to "acknowledge and credit" contributions to improved security, provided the individual reports a "unique vulnerability that leads to a code or configuration change."

The move to make OpenAI's "technology safer for everyone" comes after its headline product, ChatGPT, was banned in Italy over privacy concerns. The issue has prompted questions over regulation from other European countries, echoing the open letter signed by thousands of people, including Tesla's Elon Musk and Apple cofounder Steve Wozniak, calling for a temporary pause on the development of advanced large language models.
