Hello and welcome to Eye on AI.
This week, Waymo co-CEO Tekedra Mawakana announced that the self-driving vehicle company is now providing 100,000 paid robo-taxi rides per week across Los Angeles, San Francisco, and Phoenix. The number is double the 50,000 paid weekly rides the company announced just a few months ago in June and marks a significant milestone for the company. It’s also a significant showing of AI on our roads, driving alongside humans every day.
Autonomous vehicles, from robo-taxis to self-driving trucks, are highly dependent on AI. Some—like Waymo’s—combine extremely detailed maps with rules-based systems and real-time machine learning to navigate the roads, while other makers are moving toward systems that rely even more heavily on AI to assess a vehicle’s driving environment in real time. Not only does AI make self-driving technology possible, but autonomous vehicles are also a quintessential example of AI’s many challenges. For example, hallucinations and accuracy dominate conversations around the leading commercial AI models. But when it comes to self-driving cars, accuracy is literally a matter of life and death.
Yet self-driving cars are not a big part of today’s AI conversation, and autonomous vehicle makers aren’t talking much about AI either. While self-driving cars were held out as the ne plus ultra of AI a decade ago, Waymo’s website, for example, doesn’t mention AI even once, including only one brief nod to machine learning toward the bottom. It’s a drastic difference from practically every other industry, from enterprise software to education, health care, law, retail, and advertising, where companies are essentially shouting from the rooftops about how they’re using AI. The contrast is striking. Many companies across sectors are rushing to adopt AI without a clear strategy simply because it’s the hot thing to do—investors have bid up the stock of any company claiming to be “AI-first,” and calling a product “AI-powered” has become the go-to way to market it. Self-driving cars, by contrast, are a truly disruptive technology that actually depends on AI to function. So why aren’t self-driving car companies seizing the AI moment?
“It’s because of regulation. It’s a closed community,” Eran Ofir, CEO of Imagry, a company that creates mapless AI autonomous driving software, told Eye on AI, noting that “AI is happening every day on the road with hundreds of thousands of vehicles.”
Long before generative AI models like ChatGPT dominated the AI conversation and put AI at the top of the legislative agenda, autonomous vehicle makers were jumping through regulatory hoops to get permits to test and drive their vehicles on roads. They’ve had serious setbacks. Cruise had its driverless permits suspended and had to recall its cars after an accident last year, and today the General Motors-owned company agreed to an additional recall of 1,200 of its self-driving cars to address federal highway safety regulators’ concerns over a “hard-braking” problem.
But if one takes a longer view, the industry has actually made a lot of progress convincing regulators and the public to trust it, as Waymo’s milestone and continued expansion show. Then came generative AI, a totally different type of AI model that opened up several new cans of worms. While many of the issues around generative AI—like copyright and deepfakes—don’t even apply to the AI systems that power self-driving vehicles, they’ve made AI a topic of heated debate and caused everyone from consumers to regulators to become more skeptical of the technology and its impact. In short, for the self-driving vehicle industry, attaching itself to the AI hype is more of a risk than a selling point.
“The discussions with the regulators are difficult, and they don't want all the bad effects and debate about whether AI should be restricted and used and stuff like that,” Ofir said. “No one wants that side of it to be imposed on autonomous vehicles.”
Another interesting distinction is that while the goal for self-driving vehicles is to make something we already do (driving) safer, generative AI is largely aimed at making tasks faster and easier, sometimes at the expense of safety. Generative AI has turbocharged scams, introduced new cybersecurity threats, given rise to tools that let anyone create nonconsensual explicit deepfakes of other people, and generally led us to believe things that are not real and to distrust real things we see with our own eyes.
Overall, this leads to interesting questions about how we evaluate the risks and impacts of various AI applications—the very questions lawmakers are grappling with as they try to figure out how to regulate AI. Are self-driving cars an AI technology, or is AI just a technology used in self-driving cars? Is it both? Does it matter?
Different types of AI can work very differently, achieve very different impacts, and create very different risks. One thing that remains clear, however, is that AI is already in use all around us—even driving in the next lane.
And with that, here’s more of today’s AI news.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com