Hello and welcome to Eye on AI.
The big news over the weekend was that Elon Musk’s AI company, xAI, has raised $6 billion in a venture capital round that values the not-even-a-year-old startup at $18 billion. That puts xAI close to Anthropic, the AI startup closely partnered with Amazon, in both money raised (Anthropic has raised about $8 billion) and valuation (also about $18 billion), and well ahead of many other hot AI startups. Of course, it's still well behind the more than $13 billion OpenAI has raised from Microsoft and others, and the $80 billion at which it's been valued.
Musk raised the money from a set of usual suspects who have backed his other projects, including his purchase of Twitter. These include Saudi Prince Alwaleed Bin Talal’s Kingdom Holdings, Valor Equity Partners, Vy Capital, Andreessen Horowitz, and Sequoia Capital. Perhaps these investors see xAI, which leverages live data from X and is being closely integrated with the platform, as a way of reviving the business prospects of the struggling social network. Whatever their motivation, what’s more interesting is what the round says about Musk’s intentions: The billionaire is really serious about building a major AI contender.
Until recently, there’s been some doubt about this. For one thing, xAI’s only product to date, the open-source chatbot Grok, seemed more like political posturing than a serious commercial effort. Grok was born out of Musk’s contention that other tech companies were imposing too many politically correct guardrails on their AI chatbots. In contrast, he pledged to create “anti-woke” AI, training Grok on X posts and who knows what else. (This fit Musk’s libertarian streak and his apparently sincere belief that AI could become a dangerously powerful tool for anyone wishing to police not just speech but thought. The only way to guard against this, he has said, is for every person to have their own personal AI, free from any restrictions on the kinds of discourse in which it can engage.) The result was Grok, a chatbot that is marginally more provocative than its competitors but does not score at the top of LLM leaderboards on other capabilities, such as reasoning, translation, and summarization. It was just an occasionally racist chatbot. This may suit Musk’s politics and brand, but it hardly makes him an AI pioneer.
Then there was the fact that Musk’s whole xAI project seemed largely driven by sour grapes. Musk seemed miffed that he wasn’t getting enough credit for having been instrumental in OpenAI’s founding, and bitter that he’d walked away from the startup in 2018 after losing a bid to gain more direct control over the lab. Musk has said he’s alarmed by the for-profit, product-focused direction in which cofounder Sam Altman has pushed OpenAI since Musk’s departure from its nonprofit board. He has also said he’s alarmed that OpenAI, which was founded to prevent a single big tech company (Google at the time) from controlling superpowerful AI, has now become intimately bound to a single big tech company, Microsoft. And he says he’s disturbed that OpenAI, once dedicated to being as transparent as possible about its research, now publishes few details about the AI models it creates. Musk has even sued OpenAI, along with Altman and Greg Brockman, an OpenAI cofounder and its president, claiming that they breached promises made to him when setting up what was initially a nonprofit AI research lab.
But Musk is vulnerable to accusations of hypocrisy, given that he seems interested in having xAI create products too. And with outside investors’ money now at stake, it's likely that xAI’s efforts will also serve commercial ends, such as helping to power new features for X, and perhaps Tesla, too. One can’t help feeling that what Musk really resents is not OpenAI’s commercial turn or its lack of openness, but simply Altman’s success, especially since most of it has come from decisions Altman made after Musk parted ways with OpenAI.
Now, using the massive chip on your shoulder as the foundation for a company is not an entirely unheard-of path to business success. But as a mission to attract the best and brightest, it might not be so compelling. (Google DeepMind: “Solve intelligence, and then use it to solve everything else.” OpenAI: “Build AGI for the benefit of all humanity.” xAI: “Help repair Elon’s bruised ego.” Where would you rather work?)
Shortly after announcing the new funding for xAI, Musk got into a spat on X with Yann LeCun, Meta’s chief AI scientist and a Turing Award-winning “godfather of AI.” Musk had used the announcement to post a call for AI researchers to join xAI. LeCun pointed to Musk’s reputation as an extremely difficult boss and to his inconsistency: Musk signed the 2023 letter calling for a six-month pause in further AI development, yet is now pushing xAI to create superpowerful AI models. He also noted that Musk had said xAI’s mission was to seek the truth, even as Musk himself endorsed conspiracy theories on X. Musk threw shade back at LeCun, implying he was simply doing the bidding of Meta CEO Mark Zuckerberg and that LeCun’s days of doing cutting-edge AI research were behind him.
The spat got lots of attention. But it’s silly and misses an important point. All of the leading AI efforts are now closely linked to big financial interests—whether it is Microsoft, Google, Meta, Amazon, or X and Musk’s newfound funders. Can we really trust any of these companies to have humanity’s best interest at heart?
That’s the point former OpenAI nonprofit board members Helen Toner and Tasha McCauley made in an op-ed they published over the weekend in The Economist. The two used last week’s Scarlett Johansson-OpenAI voice controversy as a jumping-off point to say they remain convinced that the board had been in the right when it fired Altman last November. They said the entire chain of events—which saw Altman reinstated as CEO and ultimately back on the board and, as the ScarJo incident shows, continuing to be “less than fully candid,” at least with the public—demonstrated that corporate governance structures and self-regulation were too weak to protect the public from AI risks. There was too much money at stake for any of the AI labs to ever put purpose ahead of profit, they argued. So what was needed was government regulation and oversight.
I agree. We desperately need a regulator with enough expertise and authority to look over the shoulder of these companies and ensure they aren’t building systems that pose extreme risks—whether that’s because they are too capable (supercharging cyberattacks, automating fraud, or making it easier to produce bioweapons, for instance) or not capable enough (providing dangerously inaccurate medical advice, for example). The new AI Safety Institutes in the U.S. and, particularly, the U.K. are a step in that direction, but they need more power than they currently have. If that means slowing down AI development slightly or making it more difficult to release AI models as open-source software, that is a price worth paying.
With that, here's more AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn