Hello and welcome to Eye on AI.
Almost a week after the Senate’s first AI Insight Forum, the discourse about AI regulation is running hotter than ever. While the session was conducted behind closed doors, we do know a little about what happened: Elon Musk warned AI could threaten civilization; Bill Gates argued it could help address world hunger; and when Senate Majority Leader Chuck Schumer asked if the government needs to regulate AI, all of the executives present raised their hands. There were also debates about how AI will affect jobs, how bad actors could abuse open-source AI systems, and whether there should be an independent agency dedicated to overseeing AI.
The goal of all of this, of course, is for the Senate to work through how it might want to regulate this fast-moving technology. And while all of the tech executives in the room may have raised their hands in favor of regulation, there’s since been a chorus of takes from industry leaders about how regulation would stifle innovation—and threaten the United States’ position with China—that makes clear the industry would really prefer to continue running free.
As I write this, a top story on the popular tech news aggregator Techmeme is a blog post from investor and self-declared “short term AI optimist, long term AI doomer” Elad Gil, in which he argues the U.S. needs to let the technology advance and should not yet push to regulate AI.
“I do think in the long run (ie decades) AI is an existential risk for people. That said, at this point regulating AI will only send it overseas and federate and fragment the cutting edge of it to outside US jurisdiction,” reads the blog post, where he also makes the popular argument that regulation would favor Big Tech incumbents.
The chorus continued in my inbox. “Heavy handed regulations will choke our country's budding leadership in the AI sector and could have a lasting and negative impact on our ability to compete with foreign industry that is accelerating R&D with the support of their own governments,” Muddu Sudhakar, CEO at AI company Aisera, emailed Eye on AI via a representative after the forum.
The innovation-over-all argument against regulation was perhaps most on display at the recent All-In Summit, where Benchmark general partner Bill Gurley gave a talk titled “2,851 Miles.” Noting that 2,851 miles is the distance between Silicon Valley and Washington, D.C., he declared, “The reason Silicon Valley has been so successful is because it’s so fucking far away from Washington, D.C.,” receiving a roar of applause and a standing ovation.
He was immediately joined onstage by fellow VCs for a discussion, where they proceeded to tear into the idea of regulating AI and joked that regulation would lead to the government conducting code reviews and forcing product managers to travel to Washington for approval on new software features. Tech executives like DocuSign CEO Allan Thygesen and Applied Research Institute CEO David Roberts later lauded the talk on LinkedIn.
As always, it’s important to keep in mind that these VCs—much like the executives—have a vested interest in letting AI run wild. Benchmark bills itself as focused on AI startups, and many VCs have already made a ton of money in the space. But their innovation-over-all stance has also found some support in the Senate, which critics attribute to Big Tech’s lobbying against AI regulation (or at the very least, its efforts to shape any regulation so it minimally affects—and perhaps even benefits—its incumbent position).
In his opening remarks at the forum, Texas Republican Sen. Ted Cruz railed against regulation, stating that “if we stifle innovation, we may enable adversaries like China to out-innovate us,” according to a press release. And Sen. Roger Marshall, a Kansas Republican, had a similar takeaway, telling Wired after the forum, “The good news is, the United States is leading the way on this issue. I think as long as we stay on the front lines, like we have the military weapons advancement, like we have in satellite investments, we’re gonna be just fine.”
While there’s no doubt that AI has major implications for national security, it also has implications for every other aspect of society and human life. AI is not just the future—it’s a deeply impactful technology that’s been testing current laws and sowing real-world harms for years, from upending copyright law and workers’ rights to cementing discriminatory biases into everything from policing technology to how home loans are approved.
VCs and executives have long treated “innovation” as if it were a primary stakeholder in its own right. Throughout the tech industry’s rise to dominance, they’ve positioned stifling innovation as the worst-case scenario. And now rising tensions with China are adding more fuel to their argument.
And with that, here’s the rest of this week’s AI news.
But first... a reminder: Fortune is hosting an online event next month called “Capturing AI Benefits: How to Balance Risk and Opportunity.”
In this virtual conversation, part of Fortune Brainstorm AI, we will discuss the risks and potential harms of AI, focusing on how leaders can mitigate the technology’s potential negative effects so they can confidently capture its benefits. The event will take place on Oct. 5 at 11 a.m. ET. Register for the discussion here.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com