What you need to know
- OpenAI has opposed a proposed California AI bill (SB 1047) that would mandate safety measures and practices for advanced AI models.
- While the ChatGPT maker supports some of the bill's provisions, it claims regulation should be "shaped and implemented at the federal level."
- Former OpenAI researchers say the continued development of advanced AI models without regulation could cause catastrophic harm to the public.
Amid reports that OpenAI could be on the brink of bankruptcy, with projected losses of $5 billion, the ChatGPT maker has opposed a proposed California AI bill (SB 1047) designed to establish safety protocols that keep the technology within guardrails (via Business Insider).
Privacy and security are major user concerns, underscoring the pressing need for regulation and policy. OpenAI's opposition to the proposed bill has drawn backlash, including from former OpenAI researchers William Saunders and Daniel Kokotajlo.
In a letter addressing OpenAI's opposition to the proposed AI bill, the researchers indicate:
"We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing. But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems."
The letter claims the ChatGPT maker develops advanced AI models without adequate safety measures to prevent them from spiraling out of control.
Interestingly, OpenAI seemingly rushed the GPT-4o launch, reportedly sending out invitations for the event before safety testing had even begun. The company admitted its safety and alignment team was under pressure and left with little time for testing.
However, the company maintains it didn't cut any corners while shipping the product, despite claims that it prioritized shiny products over safety processes. The researchers argue that developing AI models without guardrails "poses foreseeable risks of catastrophic harm to the public."
AI regulation is crucial, but opposing forces are stronger
OpenAI CEO Sam Altman has openly expressed the need for regulation. According to the CEO, the technology should be regulated like airplanes, with an international agency ensuring safety testing of these advances. "The reason I've pushed for an agency-based approach for kind of like the big-picture stuff and not like a write-it-in-law is in 12 months, it will all be written wrong," Altman added.
According to the former OpenAI researchers, Altman's championing of AI regulation may be just a facade, as "when actual regulation is on the table, he opposes it." However, speaking to Business Insider, an OpenAI spokesman stated:
"We strongly disagree with the mischaracterization of our position on SB 1047."
In a separate letter from OpenAI Chief Strategy Officer Jason Kwon to California Senator Scott Wiener, the bill's sponsor, the company highlighted several reasons for its opposition, including its recommendation that regulation should be "shaped and implemented at the federal level."
According to Kwon:
"A federally-driven set of AI policies, rather than a patchwork of state laws, will foster innovation and position the US to lead the development of global standards."
It remains unclear whether the bill will eventually be passed into law or whether the amendments proposed by the ChatGPT maker will be incorporated. "We cannot wait for Congress to act — they've explicitly said that they aren't willing to pass meaningful AI regulation. If they ever do, it can preempt CA legislation," the researchers concluded.