California lawmakers have voted to pass an artificial intelligence safety bill, known as SB 1047, which has sparked a divide within Silicon Valley. The bill, introduced by Sen. Scott Wiener seven months ago, was approved by the state assembly and now awaits Gov. Gavin Newsom's decision by September 30.
The primary objective of SB 1047 is to require companies that spend $100 million or more on training AI models to implement safety measures preventing their technology from being used for harmful purposes, such as creating dangerous weapons or carrying out cyberattacks. The bill requires companies operating in California to report safety incidents to the government, protect whistleblowers, and allow third-party safety testing of their AI models. In extreme cases, companies could be compelled to shut down their models.
Prominent figures in the tech industry have taken opposing stances on the bill. OpenAI's chief strategy officer and Meta have expressed concerns that it could impede progress and deter companies from operating in California. On the other hand, Elon Musk, a longtime proponent of AI regulation, has voiced support for SB 1047, emphasizing the need to regulate potentially risky technologies in the interest of public safety.
Former OpenAI employees have criticized the company's opposition to the bill, citing concerns about the responsible development of AI systems. Amazon-backed Anthropic initially opposed the bill but later indicated that its benefits likely outweigh its costs, albeit with some remaining reservations.
The debate surrounding SB 1047 reflects the ongoing dialogue on AI regulation and safety measures in the tech industry. As Gov. Newsom deliberates on signing the bill into law, the implications for AI development and regulation in California remain a topic of significant interest and contention among industry stakeholders.