Asking for forgiveness rather than permission is Silicon Valley's favorite business model, from Uber's early days of entering cities without approval from local officials to social networking companies' loose treatment of user data.
With the AI market booming, the forgiveness cycle is kicking into high gear once again.
Consider Google's latest AI imbroglio. On Thursday, the company published a lengthy blog post explaining why its new AI search—a feature that it automatically activated for all U.S. users this month, without any ability to opt out—was telling people to put glue on their pizzas and to eat rocks.
It turns out AI search isn't smart enough to recognize satirical and trollish content on the web (especially in online discussion forums like Reddit), Google acknowledged. As a result, the company is now limiting the amount of such content it includes in its AI search results.
The incidents "highlighted some specific areas we needed to improve," Google VP Liz Reid wrote.
Also this week we finally heard from Helen Toner, the former OpenAI board member who was ousted in the fallout of the Sam Altman crisis last year. (The board, as you'll recall, had briefly fired Altman for not being "consistently candid" in his role as CEO.)
According to Toner, one of the reasons the board lost trust in Altman stemmed from the launch of OpenAI's most popular product, ChatGPT, in November 2022. The board was not told about the launch in advance, Toner claims, and learned of it only after the fact, when people began discussing it on Twitter.
None of these incidents are catastrophic—hopefully, no one was daft enough to add glue to their pepperoni pizza—but they underscore an entrenched behavior in Silicon Valley that shouldn't be glossed over at a time when we're trying to determine how much regulation to impose on the AI industry and how much to allow the industry to regulate itself.
There are signs that tech companies are acting more responsibly. In recent weeks, OpenAI has signed a string of multimillion-dollar deals with publishers such as Vox Media, The Atlantic, and News Corp. The deals allow OpenAI to train its large language models on the content of these publishers, rather than simply scraping it all off the web for free.
Of course, OpenAI is currently being sued by the New York Times for allegedly doing exactly that. Would any of these content deals be happening if OpenAI hadn't already been challenged for its behavior?
Alexei Oreskovic
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
Today's edition of Data Sheet was curated by David Meyer.